Dataset schema (one example per row: a query, 13 ranked documents, and 14 relevance scores; string lengths and float values are min/max over the dataset):

  Column       Type     Min        Max
  -----------  -------  ---------  ---------
  Query Text   string   10 chars   40.4k chars
  Ranking 1    string   12 chars   40.4k chars
  Ranking 2    string   12 chars   36.2k chars
  Ranking 3    string   10 chars   36.2k chars
  Ranking 4    string   13 chars   40.4k chars
  Ranking 5    string   12 chars   36.2k chars
  Ranking 6    string   13 chars   36.2k chars
  Ranking 7    string   10 chars   40.4k chars
  Ranking 8    string   12 chars   36.2k chars
  Ranking 9    string   12 chars   36.2k chars
  Ranking 10   string   12 chars   36.2k chars
  Ranking 11   string   20 chars   6.21k chars
  Ranking 12   string   14 chars   8.24k chars
  Ranking 13   string   28 chars   4.03k chars
  score_0      float64  1          1.25
  score_1      float64  0          0.25
  score_2      float64  0          0.25
  score_3      float64  0          0.25
  score_4      float64  0          0.25
  score_5      float64  0          0.25
  score_6      float64  0          0.25
  score_7      float64  0          0.24
  score_8      float64  0          0.2
  score_9      float64  0          0.03
  score_10     float64  0          0
  score_11     float64  0          0
  score_12     float64  0          0
  score_13     float64  0          0

Each preview row below appears as 14 paragraphs (Query Text, then Rankings 1-13) followed by one line listing its 14 scores.
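A few short editorial sketches are interleaved with the preview below to illustrate techniques the abstracts describe; they are illustrative sketches, not code from the papers. First, a minimal way to load and inspect one row of this table. The file name ranking_preview.parquet is a hypothetical placeholder for the dataset export, and the preview does not specify which score_i pairs with which ranked document, so the sketch only prints the columns side by side.

```python
# Sketch: load one row of the ranking dataset and list its columns.
# "ranking_preview.parquet" is a hypothetical placeholder path.
import pandas as pd

df = pd.read_parquet("ranking_preview.parquet")

row = df.iloc[0]
query = row["Query Text"]
docs = [row[f"Ranking {i}"] for i in range(1, 14)]   # Ranking 1 .. Ranking 13
scores = [row[f"score_{i}"] for i in range(14)]      # score_0 .. score_13

print("query:", query[:70], "...")
for i, score in enumerate(scores):
    print(f"score_{i} = {score:.6f}")
```

Row 1 of the preview (Query Text, then Rankings 1-13) follows.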
Unsupervised AER Object Recognition Based on Multiscale Spatio-Temporal Features and Spiking Neurons This article proposes an unsupervised address event representation (AER) object recognition approach. The proposed approach consists of a novel multiscale spatio-temporal feature (MuST) representation of input AER events and a spiking neural network (SNN) using spike-timing-dependent plasticity (STDP) for object recognition with MuST. MuST extracts the features contained in both the spatial and temporal information of AER event flow, and forms an informative and compact feature spike representation. We show not only how MuST exploits spikes to convey information more effectively, but also how it benefits the recognition using SNN. The recognition process is performed in an unsupervised manner, which does not need to specify the desired status of every single neuron of SNN, and thus can be flexibly applied in real-world recognition tasks. The experiments are performed on five AER datasets including a new one named GESTURE-DVS. Extensive experimental results show the effectiveness and advantages of the proposed approach.
DART: Distribution Aware Retinal Transform for Event-Based Cameras We introduce a generic visual descriptor, termed as distribution aware retinal transform (DART), that encodes the structural context using log-polar grids for event cameras. The DART descriptor is applied to four different problems, namely object classification, tracking, detection and feature matching: (1) The DART features are directly employed as local descriptors in a bag-of-words classification framework and testing is carried out on four standard event-based object datasets (N-MNIST, MNIST-DVS, CIFAR10-DVS, NCaltech-101); (2) Extending the classification system, tracking is demonstrated using two key novelties: (i) Statistical bootstrapping is leveraged with online learning for overcoming the low-sample problem during the one-shot learning of the tracker, (ii) Cyclical shifts are induced in the log-polar domain of the DART descriptor to achieve robustness to object scale and rotation variations; (3) To solve the long-term object tracking problem, an object detector is designed using the principle of cluster majority voting. The detection scheme is then combined with the tracker to result in a high intersection-over-union score with augmented ground truth annotations on the publicly available event camera dataset; (4) Finally, the event context encoded by DART greatly simplifies the feature correspondence problem, especially for spatio-temporal slices far apart in time, which has not been explicitly tackled in the event-based vision domain.
A Spike-event-based Neuromorphic Processor with Enhanced On-chip STDP Learning in 28nm CMOS Event-based spiking neural networks (SNNs) have displayed a promising prospect for realizing real-time, efficient and intelligent hardware platforms. However, great effort is still needed to introduce online learning abilities into neuromorphic systems. In this paper, a 28-nm CMOS neuromorphic processor is presented, fulfilling online learning by adopting counter and ...
A 28.2 μC Neuromorphic Sensing System Featuring SNN-based Near-sensor Computation and Event-Driven Body-Channel Communication for Insertable Cardiac Monitoring This paper presents an event-driven neuromorphic sensing system capable of performing on-chip feature extraction and “send-on-delta” transmission for insertable cardiac monitoring. A background offset calibration improves the SNDR of clockless level-crossing ADCs. A fully synthesized spiking neural network extracts full ECG PQRST features with <1 ms time precision. An event-driven body channe...
The part-time parliament Recent archaeological discoveries on the island of Paxos reveal that the parliament functioned despite the peripatetic propensity of its part-time legislators. The legislators maintained consistent copies of the parliamentary record, despite their frequent forays from the chamber and the forgetfulness of their messengers. The Paxon parliament's protocol provides a new way of implementing the state machine approach to the design of distributed systems.
A Bayesian Method for the Induction of Probabilistic Networks from Data This paper presents a Bayesian method for constructing probabilistic networks from databases. In particular, we focus on constructing Bayesian belief networks. Potential applications include computer-assisted hypothesis testing, automated scientific discovery, and automated construction of probabilistic expert systems. We extend the basic method to handle missing data and hidden (latent) variables. We show how to perform probabilistic inference by averaging over the inferences of multiple belief networks. Results are presented of a preliminary evaluation of an algorithm for constructing a belief network from a database of cases. Finally, we relate the methods in this paper to previous work, and we discuss open problems.
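The closed-form network score at the heart of this induction method can be stated compactly. Under the paper's stated assumptions (discrete variables, complete data, uniform parameter priors), the joint probability of a structure B_S and database D factorizes per variable. The notation below is the standard one for this result and is introduced here for illustration: variable x_i has r_i states and q_i parent configurations, N_ijk counts cases where x_i takes state k under parent configuration j, and N_ij is the sum of N_ijk over k.

```latex
P(B_S, D) \;=\; P(B_S)\,\prod_{i=1}^{n}\prod_{j=1}^{q_i}
  \frac{(r_i - 1)!}{(N_{ij} + r_i - 1)!}\,\prod_{k=1}^{r_i} N_{ijk}!
```

A greedy search over parent sets using this metric is the style of constructing algorithm the abstract's preliminary evaluation refers to.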
Wireless sensor networks: a survey This paper describes the concept of sensor networks which has been made viable by the convergence of micro-electro-mechanical systems technology, wireless communications and digital electronics. First, the sensing tasks and the potential sensor networks applications are explored, and a review of factors influencing the design of sensor networks is provided. Then, the communication architecture for sensor networks is outlined, and the algorithms and protocols developed for each layer in the literature are explored. Open research issues for the realization of sensor networks are also discussed.
Fully integrated wideband high-current rectifiers for inductively powered devices This paper describes the design and implementation of fully integrated rectifiers in BiCMOS and standard CMOS technologies for rectifying an externally generated RF carrier signal in inductively powered wireless devices, such as biomedical implants, radio-frequency identification (RFID) tags, and smartcards to generate an on-chip dc supply. Various full-wave rectifier topologies and low-power circuit design techniques are employed to decrease substrate leakage current and parasitic components, reduce the possibility of latch-up, and improve power transmission efficiency and high-frequency performance of the rectifier block. These circuits are used in wireless neural stimulating microsystems, fabricated in two processes: the University of Michigan's 3-μm 1M/2P N-epi BiCMOS, and the AMI 1.5-μm 2M/2P N-well standard CMOS. The rectifier areas are 0.12-0.48 mm2 in the above processes and they are capable of delivering >25mW from a receiver coil to the implant circuitry. The performance of these integrated rectifiers has been tested and compared, using carrier signals in 0.1-10-MHz range.
Standards for XML and Web Services Security XML schemas convey the data syntax and semantics for various application domains, such as business-to-business transactions, medical records, and production status reports. However, these schemas seldom address security issues, which can lead to a worst-case scenario of systems and protocols with no security at all. At best, they confine security to transport level mechanisms such as secure sockets layer (SSL). On the other hand, the omission of security provisions from domain schemas opens the way for generic security specifications based on XML document and grammar extensions. These specifications are orthogonal to domain schemas but integrate with them to support a variety of security objectives, such as confidentiality, integrity, and access control. In 2002, several specifications progressed toward providing a comprehensive standards framework for secure XML-based applications. The paper shows some of the most important specifications, the issues they address, and their dependencies.
Random walks in peer-to-peer networks: algorithms and evaluation We quantify the effectiveness of random walks for searching and construction of unstructured peer-to-peer (P2P) networks. We have identified two cases where the use of random walks for searching achieves better results than flooding: (a) when the overlay topology is clustered, and (b) when a client re-issues the same query while its horizon does not change much. Related to the simulation of random walks is also the distributed computation of aggregates, such as averaging. For construction, we argue that an expander can be maintained dynamically with constant operations per addition. The key technical ingredient of our approach is a deep result of stochastic processes indicating that samples taken from consecutive steps of a random walk on an expander graph can achieve statistical properties similar to independent sampling. This property has been previously used in complexity theory for construction of pseudorandom number generators. We reveal another facet of this theory and translate savings in random bits to savings in processing overhead.
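The key sampling idea in the abstract above, that nodes visited by one long walk on an expander-like overlay can stand in for nearly independent samples, is easy to picture with a toy sketch. The overlay below is a made-up ring-with-chords graph, not a real P2P topology, and the spacing parameter gap is an illustrative knob.

```python
# Sketch: use spaced-out steps of one random walk as approximate samples.
import random

def random_walk_samples(adj, start, num_samples, gap):
    """Take one walk on adj (dict: node -> neighbor list) and keep every
    `gap`-th visited node; spacing reduces correlation between samples."""
    node, samples = start, []
    while len(samples) < num_samples:
        for _ in range(gap):
            node = random.choice(adj[node])
        samples.append(node)
    return samples

# Toy overlay: a ring with extra chords (a crude stand-in for an expander).
n = 64
adj = {v: [(v - 1) % n, (v + 1) % n, (v * 7 + 3) % n] for v in range(n)}
print(random_walk_samples(adj, start=0, num_samples=8, gap=10))
```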
CoCo: coding-based covert timing channels for network flows In this paper, we propose CoCo, a novel framework for establishing covert timing channels. The CoCo covert channel modulates the covert message in the inter-packet delays of the network flows, while a coding algorithm is used to ensure the robustness of the covert message to different perturbations. The CoCo covert channel is adjustable: by adjusting certain parameters one can trade off different features of the covert channel, i.e., robustness, rate, and undetectability. By simulating the CoCo covert channel using different coding algorithms we show that CoCo improves the covert robustness as compared to the previous research, while being practically undetectable.
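A minimal sketch of the inter-packet-delay modulation described above, under strong simplifying assumptions: noiseless timing, one bit per gap, and a fixed nominal delay. The paper's actual coding layer, which adds robustness to perturbations, is not modeled; BASE and DELTA are arbitrary illustrative values.

```python
# Sketch: covert bits nudge inter-packet delays; receiver thresholds gaps.
BASE = 0.050   # nominal inter-packet delay in seconds (assumed)
DELTA = 0.010  # timing nudge per covert bit (assumed)

def encode(bits):
    """Map covert bits to a schedule of inter-packet delays."""
    return [BASE + DELTA if b else BASE - DELTA for b in bits]

def decode(delays):
    """Recover bits by thresholding observed delays at the nominal gap."""
    return [1 if d > BASE else 0 for d in delays]

msg = [1, 0, 1, 1, 0]
assert decode(encode(msg)) == msg  # round-trip on a noiseless channel
```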
CORDIC-based computation of ArcCos and ArcSin CORDIC-based algorithms to compute cos^{-1}(t), sin^{-1}(t) and sqrt{1-t^{2}} are proposed. The implementation requires a standard CORDIC module plus a module to compute the direction of rotation, this being the same hardware required for the extended CORDIC vectoring, recently proposed by the authors. Although these functions can be obtained as a special case of this extended vectoring, the specific algorithm we propose here presents two significant improvements: (1) it achieves an angle granularity of 2^{-n} using the same datapath width as the standard CORDIC algorithm (about n bits, instead of about 2n which would be required using the extended vectoring), and (2) no repetitions of iterations are needed. The proposed algorithm is compatible with the extended vectoring and, in contrast with previous implementations, the number of iterations and the delay of each iteration are the same as for the conventional CORDIC algorithm.
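For orientation, here is the baseline the paper improves on: plain CORDIC vectoring computes the angle of a vector, and arccos follows from the identity arccos(t) = atan2(sqrt(1 - t^2), t). The floating-point sketch below chains those two steps; it is not the paper's algorithm, and it inherits the standard vectoring convergence range (angles up to roughly 1.74 rad), so the test values keep t above about -0.17.

```python
# Sketch: arccos via baseline CORDIC vectoring plus a square root.
import math

def cordic_atan2(y, x, iters=32):
    """Plain CORDIC vectoring: rotate (x, y) toward y == 0, accumulating
    the applied rotation. Approximates atan(y / x) for x > 0."""
    angle = 0.0
    for i in range(iters):
        d = 1.0 if y < 0 else -1.0                # drive y toward zero
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        angle -= d * math.atan(2.0 ** -i)         # angle table entry
    return angle

def cordic_acos(t):
    # arccos(t) = atan2(sqrt(1 - t^2), t); the paper computes the sqrt
    # and the angle jointly, this sketch simply chains the two steps.
    return cordic_atan2(math.sqrt(1.0 - t * t), t)

for t in (0.9, 0.5, 0.1):
    print(t, cordic_acos(t), math.acos(t))        # compare to the library
```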
Optimum insertion/deletion point selection for fractional sample rate conversion In this paper, an optimum insertion/deletion point selection algorithm for fractional sample rate conversion (SRC) is proposed. The direct insertion/deletion technique achieves low complexity and low power consumption as compared to the other fractional SRC methods. Using a multiple set insertion/deletion technique is efficient for reduction of distortion caused by the insertion/deletion step. When the conversion factor is (N ± δ)/N, the number of possible patterns of insertion/deletion points and the number of combinations for multiple set inserters/deleters grow as δ increases. The proposed algorithm minimizes the distortion due to SRC by selecting the patterns and the combinations.
Multi-Channel Neural Recording Implants: A Review. The recently growing progress in neuroscience research and relevant achievements, as well as advancements in the fabrication process, have increased the demand for neural interfacing systems. Brain-machine interfaces (BMIs) have been revealed to be a promising method for the diagnosis and treatment of neurological disorders and the restoration of sensory and motor function. Neural recording implants, as a part of BMI, are capable of capturing brain signals, and amplifying, digitizing, and transferring them outside of the body with a transmitter. The main challenges of designing such implants are minimizing power consumption and the silicon area. In this paper, multi-channel neural recording implants are surveyed. After presenting various neural-signal features, we investigate main available neural recording circuit and system architectures. The fundamental blocks of available architectures, such as neural amplifiers, analog to digital converters (ADCs) and compression blocks, are explored. We cover the various topologies of neural amplifiers, provide a comparison, and probe their design challenges. To achieve a relatively high SNR at the output of the neural amplifier, noise reduction techniques are discussed. Also, to transfer neural signals outside of the body, they are digitized using data converters, then in most cases, the data compression is applied to mitigate power consumption. We present the various dedicated ADC structures, as well as an overview of main data compression methods.
Row 1 scores (score_0 .. score_13): 1.2, 0.2, 0.2, 0.066667, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0

Row 2 (Query Text, then Rankings 1-13):
HCP: A Flexible CNN Framework for Multi-label Image Classification. Convolutional Neural Network (CNN) has demonstrated promising performance in single-label image classification tasks. However, how CNN best copes with multi-label images still remains an open problem, mainly due to the complex underlying object layouts and insufficient multi-label training images. In this work, we propose a flexible deep CNN infrastructure, called Hypotheses-CNN-Pooling (HCP), whe...
A spiking neuromorphic design with resistive crossbar Neuromorphic systems recently gained increasing attention for their high computation efficiency. Many designs have been proposed and realized with traditional CMOS technology or emerging devices. In this work, we propose a spiking neuromorphic design built on resistive crossbar structures and implemented with IBM 130nm technology. Our design adopts a rate coding scheme where pre- and post-neuron signals are represented by digitalized pulses. The weighting function of pre-neuron signals is executed on the resistive crossbar in analog format. The computing result is transferred into digitalized output spikes via an integrate-and-fire circuit (IFC) as the post-neuron. We calibrated the computation accuracy of the entire system through circuit simulations. The results demonstrated a good match to our analytic modeling. Furthermore, we implemented both feedforward and Hopfield networks by utilizing the proposed neuromorphic design. The system performance and robustness were studied through massive Monte-Carlo simulations based on the application of digital image recognition. Compared to the previous crossbar-based computing engine that represents data with voltage amplitude, our design can achieve >50% energy savings, while the average probability of failed recognition increases by only 1.46% and 5.99% in the feedforward and Hopfield implementations, respectively.
On-Chip Memory Technology Design Space Explorations for Mobile Deep Neural Network Accelerators Deep neural network (DNN) inference tasks have become ubiquitous workloads on mobile SoCs and demand energy-efficient hardware accelerators. Mobile DNN accelerators are heavily area-constrained, with only minimal on-chip SRAM, which results in heavy use of inefficient off-chip DRAM. With diminishing returns from conventional silicon technology scaling, emerging memory technologies that offer better area density than SRAM can boost accelerator efficiency by minimizing costly off-chip DRAM accesses. This paper presents a detailed design space exploration (DSE) of technology-system co-design for systolic-array accelerators. We focus on practical/mature on-chip memory technologies, including SRAM, eDRAM, MRAM, and 3D vertical RRAM (VRRAM). The DSE employs state-of-the-art optimizations (e.g., model compression and optimized buffer scheduling), and evaluates results on important models including ResNet-50, MobileNet, and Faster-RCNN. Compared to an SRAM/DRAM baseline, MRAM-based accelerators show up to 4.68× energy benefits (57% area overhead), while a 3D VRRAM-based design achieves 2.22× energy benefits (33% area reduction).
A 7-nm Compute-in-Memory SRAM Macro Supporting Multi-Bit Input, Weight and Output and Achieving 351 TOPS/W and 372.4 GOPS In this work, we present a compute-in-memory (CIM) macro built around a standard two-port compiler macro using foundry 8T bit-cell in 7-nm FinFET technology. The proposed design supports 1024 4b × 4b multiply-and-accumulate (MAC) computations simultaneously. The 4-bit input is represented by the number of read word-line (RWL) pulses, while the 4-bit weight is realized by charge sharing among binary-weighted computation caps. Each unit of computation cap is formed by the inherent cap of the sense amplifier (SA) inside the 4-bit Flash ADC, which saves area and minimizes kick-back effect. Access time is 5.5 ns with 0.8-V power supply at room temperature. The proposed design achieves energy efficiency of 351 TOPS/W and throughput of 372.4 GOPS. Implications of our design from neural network implementation and accuracy perspectives are also discussed.
RRAM for Compute-in-Memory: From Inference to Training To efficiently deploy machine learning applications to the edge, compute-in-memory (CIM) based hardware accelerator is a promising solution with improved throughput and energy efficiency. Instant-on inference is further enabled by emerging non-volatile memory technologies such as resistive random access memory (RRAM). This paper reviews the recent progresses of the RRAM based CIM accelerator desig...
Challenges and Trends of SRAM-Based Computing-In-Memory for AI Edge Devices When applied to artificial intelligence edge devices, the conventionally von Neumann computing architecture imposes numerous challenges (e.g., improving the energy efficiency), due to the memory-wall bottleneck involving the frequent movement of data between the memory and the processing elements (PE). Computing-in-memory (CIM) is a promising candidate approach to breaking through this so-called m...
Cache operations by MRU change The performance of set associative caches is analyzed. The method used is to group the cache lines into regions according to their positions in the replacement stacks of a cache, and then to observe how the memory access of a CPU is distributed over these regions. Results from the preserved CPU traces show that the memory accesses are heavily concentrated on the most recently used (MRU) region in the cache. The concept of MRU change is introduced; the idea is to use the event that the CPU accesses a non-MRU line to approximate the time the CPU is changing its working set. The concept is shown to be useful in many aspects of cache design and performance evaluation, such as comparison of various replacement algorithms, improvement of prefetch algorithms, and speedup of cache simulation.
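The MRU-change event defined in the abstract above is simple to model: in a set-associative LRU cache, count every access that does not hit the most-recently-used line of its set. A toy model with made-up parameters and an address trace chosen to map onto one set:

```python
# Sketch: count MRU changes in a small set-associative LRU cache.
class SetAssocCache:
    def __init__(self, num_sets, ways):
        self.num_sets = num_sets
        self.ways = ways
        self.sets = [[] for _ in range(num_sets)]  # index 0 holds the MRU line
        self.mru_changes = 0

    def access(self, addr):
        s = self.sets[addr % self.num_sets]
        if not (s and s[0] == addr):       # anything other than an MRU hit
            self.mru_changes += 1
        if addr in s:
            s.remove(addr)
        s.insert(0, addr)                  # accessed line becomes the MRU
        del s[self.ways:]                  # evict beyond the associativity

cache = SetAssocCache(num_sets=4, ways=2)
for a in [0, 0, 0, 4, 0, 4, 8, 0]:         # all addresses map to set 0
    cache.access(a)
print("MRU changes:", cache.mru_changes)
```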
Broadband MIMO-OFDM Wireless Communications Orthogonal frequency division multiplexing (OFDM) is a popular method for high data rate wireless transmission. OFDM may be combined with antenna arrays at the transmitter and receiver to increase the diversity gain and/or to enhance the system capacity on time-varying and frequency-selective channels, resulting in a multiple-input multiple-output (MIMO) configuration. The paper explores various p...
On the minimal synchronism needed for distributed consensus Reaching agreement is a primitive of distributed computing. While this poses no problem in an ideal, failure-free environment, it imposes certain constraints on the capabilities of an actual system: a system is viable only if it permits the existence of consensus protocols tolerant to some number of failures. Fischer, Lynch and Paterson [FLP] have shown that in a completely asynchronous model, even one failure cannot be tolerated. In this paper we extend their work, identifying several critical system parameters, including various synchronicity conditions, and examine how varying these affects the number of faults which can be tolerated. Our proofs expose general heuristic principles that explain why consensus is possible in certain models but not possible in others.
Type-2 fuzzy sets and systems: an overview This paper provides an introduction to and an overview of type-2 fuzzy sets (T2 FS) and systems. It does this by answering the following questions: What is a T2 FS and how is it different from a T1 FS? Is there new terminology for a T2 FS? Are there important representations of a T2 FS and, if so, why are they important? How and why are T2 FSs used in a rule-based system? What are the detailed computations for an interval T2 fuzzy logic system (IT2 FLS) and are they easy to understand? Is it possible to have an IT2 FLS without type reduction? How do we wrap this up and where can we go to learn more?
RockSalt: better, faster, stronger SFI for the x86 Software-based fault isolation (SFI), as used in Google's Native Client (NaCl), relies upon a conceptually simple machine-code analysis to enforce a security policy. But for complicated architectures such as the x86, it is all too easy to get the details of the analysis wrong. We have built a new checker that is smaller, faster, and has a much reduced trusted computing base when compared to Google's original analysis. The key to our approach is automatically generating the bulk of the analysis from a declarative description which we relate to a formal model of a subset of the x86 instruction set architecture. The x86 model, developed in Coq, is of independent interest and should be usable for a wide range of machine-level verification tasks.
A Dht-Based Discovery Service For The Internet Of Things Current trends towards the Future Internet are envisaging the conception of novel services endowed with context-aware and autonomic capabilities to improve end users' quality of life. The Internet of Things paradigm is expected to contribute towards this ambitious vision by proposing models and mechanisms enabling the creation of networks of "smart things" on a large scale. It is widely recognized that efficient mechanisms for discovering available resources and capabilities are required to realize such vision. The contribution of this work consists in a novel discovery service for the Internet of Things. The proposed solution adopts a peer-to-peer approach for guaranteeing scalability, robustness, and easy maintenance of the overall system. While most existing peer-to-peer discovery services proposed for the IoT support solely exact match queries on a single attribute (i.e., the object identifier), our solution can handle multiattribute and range queries. We defined a layered approach by distinguishing three main aspects: multiattribute indexing, range query support, peer-to-peer routing. We chose to adopt an over-DHT indexing scheme to guarantee ease of design and implementation principles. We report on the implementation of a Proof of Concept in a dangerous goods monitoring scenario, and, finally, we discuss test results for structural properties and query performance evaluation.
Current-mode adaptively hysteretic control for buck converters with fast transient response and improved output regulation This paper presents a current-mode adaptively hysteretic control (CMAHC) technique to achieve fast transient response for DC-DC buck converters. A complementary full-range current sensor comprising charging-path and discharging-path sensing transistors is proposed to track the inductor current seamlessly. With the proposed current-mode adaptively hysteretic topology, the inductor current is continuously monitored, and the adaptively hysteretic threshold is dynamically adjusted according to feedback information from the output voltage level. Therefore, a fast load-transient response can be achieved. Besides, the output regulation performance is also improved by the proposed dynamic current-scaling circuitry (DCSC). Moreover, the proposed CMAHC topology can be used in a nearly zero R_ESR design configuration. The prototype fabricated using TSMC 0.25μm CMOS process occupies an area of 1.78 mm2 including all bonding pads. Experimental results show that the output voltage ripple is smaller than 30mV over a wide loading current range from 0 mA to 500 mA with maximum power conversion efficiency higher than 90%. The recovery time from light to heavy load (100 to 500 mA) is smaller than 5μs.
Power Efficiency Comparison of Event-Driven and Fixed-Rate Signal Conversion and Compression for Biomedical Applications Energy-constrained biomedical recording systems need power-efficient data converters and good signal compression in order to meet the stringent power consumption requirements of many applications. In literature today, typically a SAR ADC in combination with digital compression is used. Recently, alternative event-driven sampling techniques have been proposed that incorporate compression in the ADC, such as level-crossing A/D conversion. This paper describes the power efficiency analysis of such level-crossing ADC (LCADC) and the traditional fixed-rate SAR ADC with simple compression. A model for the power consumption of the LCADC is derived, which is then compared to the power consumption of the SAR ADC with zero-order hold (ZOH) compression for multiple biosignals (ECG, EMG, EEG, and EAP). The LCADC is more power efficient than the SAR ADC up to a cross-over point in quantizer resolution (for example 8 bits for an EEG signal). This cross-over point decreases with the ratio of the maximum to average slope in the signal of the application. It also changes with the technology and design techniques used. The LCADC is thus suited for low to medium resolution applications. In addition, the event-driven operation of an LCADC results in fewer data to be transmitted in a system application. The event-driven LCADC without timer and with single-bit quantizer achieves a reduction in power consumption at system level of two orders of magnitude, an order of magnitude better than the SAR ADC with ZOH compression. At system level, the LCADC thus offers a big advantage over the SAR ADC.
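The level-crossing conversion contrasted with SAR sampling above can be sketched in a few lines: an event is emitted only when the input crosses a quantization level, so slowly varying biosignals generate few events. DELTA and the test signal below are arbitrary choices for illustration.

```python
# Sketch: level-crossing sampling emits (time, direction) events only at
# level crossings, versus one sample per tick for a fixed-rate ADC.
import math

DELTA = 0.1  # level spacing (assumed)

def level_crossing_sample(signal):
    """signal: iterable of (t, value). Returns (t, +1/-1) crossing events."""
    events, last_level = [], None
    for t, v in signal:
        level = math.floor(v / DELTA)
        if last_level is None:
            last_level = level
            continue
        while level > last_level:          # one event per level crossed
            last_level += 1
            events.append((t, +1))
        while level < last_level:
            last_level -= 1
            events.append((t, -1))
    return events

sig = [(t / 100, math.sin(2 * math.pi * t / 100)) for t in range(100)]
events = level_crossing_sample(sig)
print(len(events), "events vs", len(sig), "fixed-rate samples")
```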
Row 2 scores (score_0 .. score_13): 1.2, 0.2, 0.2, 0.1, 0.066667, 0.018182, 0, 0, 0, 0, 0, 0, 0, 0

Row 3 (Query Text, then Rankings 1-13):
A MIMO decoder accelerator for next generation wireless communications In this paper, we present a multi-input-multi-output (MIMO) decoder accelerator architecture that offers versatility and reprogrammability while maintaining a very high performance-cost metric. The accelerator is meant to address the MIMO decoding bottlenecks associated with the convergence of multiple high-speed wireless standards onto a single device. It is scalable in the number of antennas, bandwidth, modulation format, and most importantly, present and emerging decoder algorithms. It features a Harvard-like architecture with complex vector operands and a deeply pipelined fixed-point complex arithmetic processing unit. When implemented on a Xilinx Virtex-4 LX200FF1513 field-programmable gate array (FPGA), the design occupied 43% of overall FPGA resources. The accelerator shows an advantage of up to three orders of magnitude (1000 times) in power-delay product for typical MIMO decoding operations relative to a general purpose DSP. When compared to dedicated application-specific IC (ASIC) implementations of MMSE MIMO decoders, the accelerator showed a degradation of 340%-17%, depending on the actual ASIC being considered. In order to optimize the design for both speed and area, specific challenges had to be overcome. These include: definition of the processing units and their interconnection; proper dynamic scaling of the signal; and memory partitioning and parallelism.
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use {\em dominance frontiers}, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
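The dominance-frontier concept introduced above admits a compact computation: for each join point, walk each predecessor up the dominator tree until reaching the join point's immediate dominator, adding the join point to the frontier of every node passed. A sketch using networkx for the dominator tree, on a made-up diamond-shaped CFG:

```python
# Sketch: dominance frontiers via the per-join-point dominator-tree walk.
import networkx as nx

def dominance_frontiers(g, entry):
    idom = nx.immediate_dominators(g, entry)   # node -> immediate dominator
    df = {n: set() for n in g}
    for n in g:
        preds = list(g.predecessors(n))
        if len(preds) >= 2:                    # only join points contribute
            for p in preds:
                runner = p
                while runner != idom[n]:       # climb the dominator tree
                    df[runner].add(n)
                    runner = idom[runner]
    return df

g = nx.DiGraph([("entry", "a"), ("entry", "b"), ("a", "merge"),
                ("b", "merge"), ("merge", "exit")])
print(dominance_frontiers(g, "entry"))  # a and b have {merge} as frontier
```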
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ⋄W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ⋄W. Thus, ⋄W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
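Chord's single operation, mapping a key to a node, reduces to "first node clockwise from the key's identifier on the ring." The sketch below shows only that ring semantics with a deliberately small hash space; real Chord resolves the successor in O(log N) hops via finger tables rather than consulting a sorted global list.

```python
# Sketch: Chord-style key -> successor-node mapping on an identifier ring.
import bisect
import hashlib

M = 2 ** 16                                # small ID space for the demo

def ring_id(name):
    """Hash a name onto the identifier ring."""
    return int(hashlib.sha1(name.encode()).hexdigest(), 16) % M

nodes = sorted(ring_id(f"node{i}") for i in range(8))

def successor(key_id):
    """First node clockwise from key_id, wrapping around the ring."""
    i = bisect.bisect_left(nodes, key_id)
    return nodes[i % len(nodes)]

key = ring_id("some-data-item")
print(f"key {key} -> node {successor(key)}")
```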
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area² product (EDA²P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA²P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
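As a concrete instance of the method surveyed above, ADMM on the lasso alternates a ridge-like x-update, a soft-thresholding z-update, and a dual update. The sketch fixes the penalty rho and the iteration count for simplicity rather than using a principled stopping rule; the problem instance is synthetic.

```python
# Sketch: ADMM for the lasso,
#   minimize (1/2)||Ax - b||^2 + lam*||z||_1  subject to  x = z.
import numpy as np

def lasso_admm(A, b, lam, rho=1.0, iters=200):
    n = A.shape[1]
    x = z = u = np.zeros(n)
    AtA_rhoI_inv = np.linalg.inv(A.T @ A + rho * np.eye(n))  # cached solve
    Atb = A.T @ b
    for _ in range(iters):
        x = AtA_rhoI_inv @ (Atb + rho * (z - u))             # x-update
        v = x + u
        z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)  # shrinkage
        u = u + x - z                                        # dual ascent
    return z

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 20))
x_true = np.zeros(20)
x_true[:3] = [2.0, -1.5, 1.0]                                # sparse truth
b = A @ x_true + 0.01 * rng.normal(size=50)
print(np.round(lasso_admm(A, b, lam=1.0), 2))                # near-sparse fit
```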
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)- size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε2}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D+√n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(logN), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
A 0.5 V 1.1 MS/sec 6.3 fJ/Conversion-Step SAR-ADC With Tri-Level Comparator in 40 nm CMOS This paper presents an extremely low-voltage operation and power efficient successive-approximation-register (SAR) analog-to-digital converter (ADC). Tri-level comparator is proposed to relax the speed requirement of the comparator and decrease the resolution of internal Digital-to-Analog Converter (DAC) by 1-bit. The internal charge redistribution DAC employs unit capacitance of 0.5 fF and ADC operates at nearly thermal noise limitation. To deal with the problem of capacitor mismatch, reconfigurable capacitor array and calibration procedure were developed. The prototype ADC fabricated using 40 nm CMOS process achieves 46.8 dB SNDR and 58.2 dB SFDR with 1.1 MS/sec at 0.5 V power supply. The FoM is 6.3-fJ/conversion step and the chip die area is only 160 μm × 70 μm.
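For context, the plain successive-approximation loop underlying a SAR ADC like the one above is a bit-by-bit binary search against an internal DAC; the paper's tri-level comparator and capacitor-mismatch calibration refine this baseline and are not modeled here. Parameter values are illustrative.

```python
# Sketch: the basic SAR ADC conversion loop as a binary search.
def sar_convert(vin, vref=0.5, bits=10):
    """Binary-search vin in [0, vref); returns the digital output code."""
    code = 0
    for i in reversed(range(bits)):
        trial = code | (1 << i)                  # tentatively set bit i
        dac = vref * trial / (1 << bits)         # DAC level for the trial
        if vin >= dac:                           # comparator decision
            code = trial                         # keep the bit
    return code

vin = 0.3217
code = sar_convert(vin)
print(code, code / 2**10 * 0.5)                  # code and its voltage
```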
Conductance modulation techniques in switched-capacitor DC-DC converter for maximum-efficiency tracking and ripple mitigation in 22nm Tri-gate CMOS Active conduction modulation techniques are demonstrated in a fully integrated multi-ratio switched-capacitor voltage regulator with hysteretic control, implemented in 22nm tri-gate CMOS with high-density MIM capacitor. We present (i) an adaptive switching frequency and switch-size scaling scheme for maximum efficiency tracking across a wide range of voltages and currents, governed by a frequency-based control law that is experimentally validated across multiple dies and temperatures, and (ii) a simple active ripple mitigation technique to modulate the gate drive of select MOSFET switches effectively in all conversion modes. Efficiency boosts of up to 15% are measured under light-load conditions. Load-independent output ripple of <50mV is achieved, enabling fewer interleaving phases. Testchip implementations and measurements demonstrate ease of integration in SoC designs, power efficiency benefits and EMI/RFI improvements.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
Row 3 scores (score_0 .. score_13): 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0

Row 4 (Query Text, then Rankings 1-13):
A framework for security on NoC technologies Multiple heterogeneous processor cores, memory cores and application specific IP cores integrated in a communication network, also known as networks on chips (NoCs), will handle a large number of applications including security. Although NoCs offer more resistance to bus probing attacks, power/EM attacks and network snooping attacks are relevant. For the first time, a framework for security on NoC at both the network level (or transport layer) and at the core level (or application layer) is proposed. At the network level, each IP core has a security wrapper and a key-keeper core is included in the NoC, protecting encrypted private and public keys. Using this framework, unencrypted keys are prevented from leaving the cores and NoC. This is crucial to prevent untrusted software on or off the NoC from gaining access to keys. At the core level (application layer) the security framework is illustrated with software modification for resistance against power attacks with extremely low overheads in energy. With the emergence of secure IP cores in the market and nanometer technologies, a security framework for designing NoCs is crucial for supporting future wireless Internet enabled devices.
Hardware-Assisted Detection of Malicious Software in Embedded Systems One of the critical security threats to computer systems is the execution of malware or malicious software. Several intrusion detection systems have been proposed which perform detection analysis in the software using the audit files generated by the operating system. Software-based solutions to this problem are relatively slow, so these techniques can be used forensically, but not in real-time to stop an exploit before it has an opportunity to do damage. We present a technique to implement intrusion detection for secure embedded systems by detecting behavioral differences between the correct system and the malware. The system is implemented using FPGA logic to enable the detection process to be regularly updated to adapt to new malware and changing system behavior.
Store-and-Forward Buffer Requirements in a Packet Switching Network Previous analytic models for packet switching networks have always assumed infinite storage capacity in store-and-forward (S/F) nodes. In this paper, we relax this assumption and present a model for a packet switching network in which each node has a finite pool of S/F buffers. A packet arriving at a node in which all S/F buffers are temporarily filled is discarded. The channel transmission control mechanisms of positive acknowledgment and time-out of packets are included in this model. Individual S/F nodes are analyzed separately as queueing networks with different classes of packets. The single node results are interfaced by imposing a continuity of flow constraint. A heuristic algorithm for determining a balanced assignment of nodal S/F buffer capacities is proposed. Numerical results for the performance of a 19 node network are illustrated.
Safely Preventing Unbounded Delays During Bus Transactions in FPGA-based SoC Advanced eXtensible Interface (AXI) is an open-standard communication bus interface implemented in most commercial off-the-shelf FPGA System-on-Chips (SoC) to exchange data within the chip. Unfortunately, the AXI standard does not mandate any mechanism to detect possible misbehavior of the connected modules. This work shows that this lack of specification has a relevant impact on popular implementations of the AXI bus. In particular, it is shown how it is easily possible to inject arbitrarily-long delays on modern FPGA system-on-chips under the presence of misbehaving bus masters. To safely solve this issue, this paper presents a general timing analysis to bound the execution of periodically-invoked hardware accelerators in nominal conditions. This timing analysis is then used to configure a latency-free hardware module named AXI Stall Monitor (ASM), also proposed in this paper, capable of detecting and safely solving possible stalls during AXI bus transactions. The ASM leaves a quantified flexibility to the hardware accelerators when deviating from nominal conditions. The contribution is finally supported by a set of experiments on the Zynq-7000 and Zynq UltraScale+ SoCs by Xilinx.
An Authenticated Encryption Based Security Framework for NoC Architectures Network on Chip (NoC) is an emerging solution to the existing scalability problems with SoC. However it is exposed to security threats like extraction of secret information from IP cores. In this paper we present an Authenticated Encryption (AE) based security framework for NoC based systems. The security framework resides in Network Interface (NI) of every secure IP core allowing secure communication among such IP cores. We simulated and implemented our framework using Verilog/VHDL modules on top of NoCem emulator. The results showed tolerable area overhead and did not affect the network performance apart from some initial latency.
SECA: security-enhanced communication architecture In this work, we propose and investigate the idea of enhancing a System-on-Chip (SoC) communication architecture (the fabric that integrates system components and carries the communication traffic between them) to facilitate higher security. We observe that a wide range of common security attacks are manifested as abnormalities in the system-level communication traffic. Therefore, the communication architecture, with its global system-level visibility, can be used to detect them. The communication architecture can also effectively react to security attacks by disallowing the offending communication transactions, or by notifying appropriate components of a security violation. We describe the general principles involved in a security-enhanced communication architecture (SECA) and show how several security objectives can be encoded in terms of policies that govern the inter-component communication traffic. We detail the implementation of SECA in the context of a popular commercial on-chip bus architecture (the AMBA architecture from ARM) through a combination of a centralized security enforcement module, and enhancements to the bus interfaces of system components. We illustrate how SECA can be used to enhance embedded system security in several application scenarios. A simple instance of SECA has been implemented in a commercial application processor SoC for mobile phones. We provide results of experiments performed to validate the proposed concepts through system-level simulation, and evaluate their overheads through hardware implementation using a commercial design flow.
Yet another MicroArchitectural Attack: exploiting I-Cache MicroArchitectural Attacks (MA), which can be considered as a special form of Side-Channel Analysis, exploit microarchitectural functionalities of processor implementations and can compromise the security of computational environments even in the presence of sophisticated protection mechanisms like virtualization and sandboxing. This newly evolving research area has attracted significant interest due to the broad application range and the potentials of these attacks. Cache Analysis and Branch Prediction Analysis were the only types of MA that had been known publicly. In this paper, we introduce Instruction Cache (I-Cache) as yet another source of MA and present our experimental results which clearly prove the practicality and danger of I-Cache Attacks.
Grand Pwning Unit: Accelerating Microarchitectural Attacks with the GPU Dark silicon is pushing processor vendors to add more specialized units such as accelerators to commodity processor chips. Unfortunately this is done without enough care to security. In this paper we look at the security implications of integrated Graphical Processor Units (GPUs) found in almost all mobile processors. We demonstrate that GPUs, already widely employed to accelerate a variety of benign applications such as image rendering, can also be used to "accelerate" microarchitectural attacks (i.e., making them more effective) on commodity platforms. In particular, we show that an attacker can build all the necessary primitives for performing effective GPU-based microarchitectural attacks and that these primitives are all exposed to the web through standardized browser extensions, allowing side-channel and Rowhammer attacks from JavaScript. These attacks bypass state-of-the-art mitigations and advance existing CPU-based attacks: we show the first end-to-end microarchitectural compromise of a browser running on a mobile phone in under two minutes by orchestrating our GPU primitives. While powerful, these GPU primitives are not easy to implement due to undocumented hardware features. We describe novel reverse engineering techniques for peeking into the previously unknown cache architecture and replacement policy of the Adreno 330, an integrated GPU found in many common mobile platforms. This information is necessary when building shader programs implementing our GPU primitives. We conclude by discussing mitigations against GPU-enabled attackers.
Theory and Applications of Robust Optimization In this paper we survey the primary research, both theoretical and applied, in the area of robust optimization (RO). Our focus is on the computational attractiveness of RO approaches, as well as the modeling power and broad applicability of the methodology. In addition to surveying prominent theoretical results of RO, we also present some recent results linking RO to adaptable models for multistage decision-making problems. Finally, we highlight applications of RO across a wide spectrum of domains, including finance, statistics, learning, and various areas of engineering.
Mapping irregular applications to DIVA, a PIM-based data-intensive architecture
Dithering Skip Modulation, Width and Dead Time Controllers in Highly Efficient DC-DC Converters for System-On-Chip Applications This paper proposes temperature-independent load sensor (LS), optimum width controller (OWC), optimum dead-time controller (ODC), and tri-mode operation to achieve high efficiency over an ultra-wide-load range. Higher power efficiency and wider loading current range require rethinking the control method for DC-DC converters. Therefore, a highly efficient tri-mode DC-DC converter is invented in thi...
Simulation knowledge extraction and reuse in constrained random processor verification This work proposes a methodology of knowledge extraction from constrained-random simulation data. Feature-based analysis is employed to extract rules describing the unique properties of novel assembly programs hitting special conditions. The knowledge learned can be reused to guide constrained-random test generation towards uncovered corners. The experiments are conducted based on the verification environment of a commercial processor design, in parallel with the on-going verification efforts. The experimental results show that by leveraging the knowledge extracted from constrained-random simulation, we can improve the test templates to activate the assertions that otherwise are difficult to activate by extensive simulation.
An energy-efficient VLSI architecture for pattern recognition via deep embedding of computation in SRAM In this paper, we propose the concept of compute memory, where computation is deeply embedded into the memory (SRAM). This deep embedding enables multi-row read access and analog signal processing. Compute memory exploits the relaxed precision and linearity requirements of pattern recognition applications. System-level simulations incorporating various deterministic errors from the analog signal chain demonstrate that the limited accuracy of analog processing does not significantly degrade the system performance, which means the probability of pattern detection is minimally impacted. The estimated energy saving is 63% as compared to the conventional system with standard embedded memory and parallel processing architecture, for a 256×256 target image.
A Hybrid 1st/2nd-Order VCO-Based CTDSM With Rail-to-Rail Artifact Tolerance for Bidirectional Neural Interface Bi-directional brain machine interfaces enable simultaneous brain activity monitoring and neural modulation. However, stimulation artifacts can saturate the instrumentation front-end while concurrent on-site recording is needed. This brief presents a voltage-controlled-oscillator (VCO) based continuous-time ΔΣ modulator (CTDSM) with rail-to-rail input range and fast artifact tracking. A hybrid 1st/2nd-order loop is designed to achieve high dynamic range (DR) and large input range. Stimulation artifact is detected by a phase counter and compensated by the 1st-order loop. The residue signal is digitized by the 2nd-order loop for high precision. Redundancy between the two loops is implemented as feedback capacitor elements with non-binary ratio to guarantee feedback stability and linearity. Fabricated in a 55-nm CMOS process, the prototype achieves 65.7dB SNDR across 10 kHz bandwidth with a full scale of 200 mVpp, and a ±1.2 V input range is achieved to suppress artifacts. A saline-based experiment with simultaneous stimulation and recording demonstrates that the implemented system can track and tolerate rail-to-rail stimulation artifacts within 30 μs while small neural signals can be continuously monitored.
1.075693
0.066667
0.066667
0.066667
0.042222
0.025393
0.002347
0.000003
0
0
0
0
0
0
Asymptotic stability for time-variant systems and observability: Uniform and nonuniform criteria This paper presents some new criteria for uniform and nonuniform asymptotic stability of equilibria for time-variant differential equations and this within a Lyapunov approach. The stability criteria are formulated in terms of certain observability conditions with the output derived from the Lyapunov function. For some classes of systems, this system theoretic interpretation proves to be fruitful since-after establishing the invariance of observability under output injection-this enables us to check the stability criteria on a simpler system. This procedure is illustrated for some classical examples.
Robustness of Adaptive Control under Time Delays for Three-Dimensional Curve Tracking. We analyze the robustness of a class of controllers that enable three-dimensional curve tracking by a free moving particle. The free particle tracks the closest point on the curve. By building a strict Lyapunov function and robustly forward invariant sets, we show input-to-state stability under predictable tolerance and safety bounds that guarantee robustness under control uncertainty, input delays, and a class of polygonal state constraints, including adaptive tracking and parameter identification under unknown control gains. Such an understanding may provide certified performance when the control laws are applied to real-life systems.
Isometric Torque Control for Neuromuscular Electrical Stimulation With Time-Varying Input Delay. Previous results have shown experimental evidence that the muscle response to neuromuscular electrical stimulation (NMES) is delayed; the time lag is often referred to as electromechanical delay. NMES closed-loop control methods have been developed to compensate for a known constant input delay. However, as a muscle fatigues, this delay increases. This paper develops a feedback controller that robustly compensates for the time-varying delay of an uncertain muscle model during isometric contractions. The controller is proven to yield global uniformly ultimately bounded torque tracking error. Experimental results illustrate the effectiveness of the developed controller and the time-varying nature of the delayed response.
On the convergence of a time-variant linear differential equation arising in identification. This paper discusses the asymptotic stability for a well-known time-variant system by means of the direct method of Liapunov. The system exhibits a positive time-invariant Liapunov function with negative semi-definite derivative. The paper focuses on the extra conditions needed in order to guarantee asymptotic stability. The proposed criterion is compared with the results available in the literature.
Local Stabilization of Nonlinear Systems Through the Reduction Model Approach. We study a general class of nonlinear systems with input delays of arbitrary size. We adapt the reduction model approach to prove local asymptotic stability of the closed loop input delayed systems, using feedbacks that may be nonlinear. Our Lyapunov-Krasovskii functionals make it possible to determine estimates of the basins of attraction for the closed loop systems.
Stabilization of nonlinear delay systems using approximate predictors and high-gain observers We provide a solution to the heretofore open problem of stabilization of systems with arbitrarily long delays at the input and output of a nonlinear system using output feedback only. The solution is global, employs the predictor approach over the period that combines the input and output delays, addresses nonlinear systems with sampled measurements and with control applied using a zero-order hold, and requires that the sampling/holding periods be sufficiently short, though not necessarily constant. Our approach considers a class of globally Lipschitz strict-feedback systems with disturbances and employs an appropriately constructed successive approximation of the predictor map, a high-gain sampled-data observer, and a linear stabilizing feedback for the delay-free system. The obtained results guarantee robustness to perturbations of the sampling schedule and different sampling and holding periods are considered. The approach is specialized to linear systems, where the predictor is available explicitly.
A framework for nonlinear sampled-data observer design via approximate discrete-time models and emulation We study observer design for sampled-data nonlinear systems using two approaches: (i) the observer is designed via an approximate discrete-time model of the plant; (ii) the observer is designed based on the continuous-time plant model and then discretized for sampled-data implementation (emulation). We investigate under what conditions, and in what sense, these designs achieve convergence for the unknown exact discrete-time model. We present examples which show that designs that violate our conditions may indeed lead to instability when implemented on the exact model.
Design and Stability Analysis of Networked Predictive Control Systems This brief is concerned with networked predictive control and stability analysis for networked control systems (NCSs) with time-varying network communication delay. By taking full advantage of the packet-based transmission in NCSs, a state-based networked predictive control approach is proposed to actively compensate for the network communication delay. Based on a switched system approach, a stability analysis result is also established via the average dwell time technique. Finally, the effectiveness of the proposed method is illustrated by a practical experiment.
Long short-term memory. Learning to store information over extended time intervals by recurrent backpropagation takes a very long time, mostly because of insufficient, decaying error backflow. We briefly review Hochreiter's (1991) analysis of this problem, then address it by introducing a novel, efficient, gradient-based method called long short-term memory (LSTM). Truncating the gradient where this does not do harm, LSTM can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units. Multiplicative gate units learn to open and close access to the constant error flow. LSTM is local in space and time; its computational complexity per time step and weight is O(1). Our experiments with artificial data involve local, distributed, real-valued, and noisy pattern representations. In comparisons with real-time recurrent learning, back propagation through time, recurrent cascade correlation, Elman nets, and neural sequence chunking, LSTM leads to many more successful runs, and learns much faster. LSTM also solves complex, artificial long-time-lag tasks that have never been solved by previous recurrent network algorithms.
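The gating mechanism described in the abstract above is concrete enough to sketch. Below is a minimal single-cell LSTM forward step in Python/NumPy; the weight layout and names are mine, not the paper's, and the forget gate is a later extension of the original 1997 cell, included here because it is now standard.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, b):
    """One forward step of a standard LSTM cell.

    x: input vector; h_prev/c_prev: previous hidden and cell state.
    W: weight matrix of shape (4*n_hidden, n_input + n_hidden).
    The cell state c is the 'constant error carousel': it is updated
    additively, so error can flow across many time steps.
    """
    z = W @ np.concatenate([x, h_prev]) + b
    n = h_prev.size
    i = sigmoid(z[0*n:1*n])   # input gate: what to write into the carousel
    f = sigmoid(z[1*n:2*n])   # forget gate (a later addition to the 1997 cell)
    o = sigmoid(z[2*n:3*n])   # output gate: what to expose to the network
    g = np.tanh(z[3*n:4*n])   # candidate cell input
    c = f * c_prev + i * g    # additive state update
    h = o * np.tanh(c)        # gated output
    return h, c

# toy usage: 3 inputs, 4 hidden units, 5 time steps
rng = np.random.default_rng(0)
W = rng.standard_normal((16, 7)) * 0.1
b = np.zeros(16)
h, c = np.zeros(4), np.zeros(4)
for t in range(5):
    h, c = lstm_step(rng.standard_normal(3), h, c, W, b)
print(h)
```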
The Impact of Data Aggregation in Wireless Sensor Networks Sensor networks are distributed event-based systems that differ from traditional communication networks in several ways: sensor networks have severe energy constraints, redundant low-rate data, and many-to-one flows. Data-centric mechanisms that perform in-network aggregation of data are needed in this setting for energy-efficient information flow. In this paper we model data-centric routing and compare its performance with traditional end-to-endrouting schemes. We examine the impact of source-destination placement and communication network density on the energy costs and delay associated with data aggregation. We show that data-centric routing offers significant performance gains across a wide range of operational scenarios. We also examine the complexity of optimal data aggregation, showing that although it is an NP-hard problem in general, there exist useful polynomial-time special cases.
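As a toy illustration of why in-network aggregation saves energy, consider a hypothetical topology (mine, not one of the paper's scenarios) where k sources sit on length-d branches that merge at a junction L hops from the sink; counting link transmissions as an energy proxy shows the gain of fusing packets at the junction.

```python
# Energy proxy: link transmissions for k sources whose length-d
# branches merge at a junction L hops from the sink.
def address_centric(k, d, L):
    return k * (d + L)      # every source's packet traverses the shared path

def data_centric(k, d, L):
    return k * d + L        # packets are fused into one at the junction

for k in (2, 5, 10):
    print(k, address_centric(k, d=3, L=10), data_centric(k, d=3, L=10))
```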
Evolution on SoC Integration: GSM Baseband-Radio in 0.13 μm CMOS Extended by Fully Integrated Power Management Unit GSM baseband-radio systems-on-chip (SoC) fabricated in CMOS technology are well established on the market. The next evolutionary step on the proceeding integration path is the extension of the baseband-radio functionality by further integration of power-management-unit (PMU) functionality. The PMU of a mobile phone is normally realized as a separate chip. The integration of the PMU promises lowest phone cost, eases PCB design, reduces phone development time, and enables overall system optimization. However, several integration challenges have to be tackled. An SoC is presented for which the aforementioned challenges were successfully managed. The achieved performance is demonstrated with corresponding measurement results.
Design and Analysis of a Class-D Stage With Harmonic Suppression. This paper presents the design and analysis of a low-power Class-D stage in 90 nm CMOS featuring a harmonic suppression technique, which cancels the 3rd harmonic by shaping the output voltage waveform. Only digital circuits are used and the short-circuit current present in Class-D inverter-based output stages is eliminated, relaxing the buffer requirements. Using buffers with reduced drive strengt...
3.4 A 36Gb/s PAM4 transmitter using an 8b 18GS/S DAC in 28nm CMOS At data rates beyond 10Gb/s, most wireline links employ NRZ signaling. Serial NRZ links as high as 56Gb/s and 60Gb/s have been reported [1]. Nevertheless, as the rate increases, the constraints imposed by the channel, package, and die become more severe and do not benefit from process scaling in the same fashion that circuit design does. Reflections from impedance discontinuities in the PCB and package caused by vias and connectors introduce significant signal loss and distortions at higher frequencies. Even with an ideal channel, at every package-die interface, there is an intrinsic parasitic capacitance due to the pads and the ESD circuit amounting to at least 150fF, and a 50Ω resistor termination at both the transmit and receive ends resulting in an intrinsic pole at 23GHz or lower. In light of all these limitations, serial NRZ signaling beyond 60Gb/s appears suboptimal in terms of both power and performance. Utilizing various modulation techniques such as PAM4, one can achieve a higher spectral efficiency [2]. To enable such transmission formats, high-speed moderate-resolution data converters are required. This paper describes a 36Gb/s transmitter based on an 18GS/s 8b DAC implemented in 28nm CMOS, compliant to the new IEEE802.3bj standard for 100G Ethernet over backplane and copper cables [3].
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54 ×(CPU), 1.56 ×(GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02 ×(memristor-based PIM) and 10.48 ×(CMOS-based PIM), on average.
1.106422
0.1025
0.1025
0.053868
0.02689
0.018372
0.000671
0.000011
0
0
0
0
0
0
A compact 87.1-dB DR bandwidth-scalable delta-sigma modulator based on dynamic gain-bandwidth-boosting inverter for audio applications This paper presents a compact audio delta-sigma modulator that features a scalable bandwidth to also support biomedical instrumentation such as digital hearing aids and electromyography, sustaining a constant FoMS. The modulator achieves a small die area and low power consumption by exploiting the proposed dynamic gain-bandwidth-boosting (GBWB) scheme in the inverter-based class-AB OTA with minimal overhead. The modulator features 56.8 dB PSRR without any external decoupling capacitor for the power supply and 66.1 dB CMRR in the audio band by utilizing a pseudo-differential structure with the dynamic GBWB scheme. For 25 kHz bandwidth, the modulator dissipates 68 μW from a 1.8 V supply and achieves a peak SNDR of 84.0 dB, a peak SNR of 85.1 dB, and a DR of 87.1 dB, maintaining a resolution higher than 13 bits against a supply voltage variation of 0.15 V. The prototype modulator is fabricated in 0.18 μm CMOS technology, occupying an active area of 0.0939 mm².
Design-oriented estimation of thermal noise in switched-capacitor circuits. Thermal noise represents a major limitation on the performance of most electronic circuits. It is particularly important in switched circuits, such as the switched-capacitor (SC) filters widely used in mixed-mode CMOS integrated circuits. In these circuits, switching introduces a boost in the power spectral density of the thermal noise due to aliasing. Unfortunately, even though the theory of nois...
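A useful rule of thumb behind such analyses is that the total thermal-noise power sampled onto a capacitor is kT/C, independent of the switch resistance; the sketch below works that number for a few capacitor sizes (standard physics, not a result specific to this paper).

```python
import math

k = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0          # temperature, K

# RMS noise voltage sampled onto a capacitor: v_n = sqrt(kT/C).
# The switch resistance R sets the noise bandwidth (~1/(4RC)) but
# cancels out of the integrated noise power, which is why sampling
# in SC circuits folds the same kT/C power into the baseband.
for C in (1e-12, 10e-12, 100e-12):
    vn = math.sqrt(k * T / C)
    print(f"C = {C*1e12:5.0f} pF -> v_n = {vn*1e6:6.1f} uV rms")
```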
An Ultra-Low-Voltage 160 MS/s 7 Bit Interpolated Pipeline ADC Using Dynamic Amplifiers This paper presents a 0.55 V, 7 bit, 160 MS/s pipeline ADC using dynamic amplifiers. In this ADC, high-speed open-loop dynamic amplifiers with a common-mode detection technique are used as residue amplifiers to increase the ADC's speed, to enhance the robustness against supply voltage scaling, and to realize clock-scalable power consumption. To mitigate the absolute gain constraint of the residue amplifiers in a pipeline ADC, the interpolated pipeline architecture is employed to shift the gain requirement from absolute to relative accuracy. To show the new requirements of the residue amplifiers, the effects of gain mismatch and nonlinearity of the dynamic amplifiers are analyzed. The 7 bit prototype ADC fabricated in 90 nm CMOS demonstrates an ENOB of 6.0 bits at a conversion rate of 160 MS/s with an input close to the Nyquist frequency. At this conversion rate, it consumes 2.43 mW from a 0.55 V supply. The resulting FoM of the ADC is 240 fJ/conversion-step.
A 134-μW 99.4-dB SNDR Audio Continuous-Time Delta-Sigma Modulator With Chopped Negative-R and Tri-Level FIR-DAC This article presents a low-power audio continuous-time delta-sigma modulator (CTDSM) that employs a chopped negative-R and a tri-level finite impulse-response (FIR) DAC. The noise of the first opamp is mitigated by using a negative-R at the virtual ground of the first integrator, and the negative-R is then chopped to remove the intrinsic 1/f noise of the negative-R. The highly linear feedba...
A 0.4 V 63 μW 76.1 dB SNDR 20 kHz Bandwidth Delta-Sigma Modulator Using a Hybrid Switching Integrator. This paper presents a delta-sigma modulator operating at a supply voltage of 0.4 V. The designed delta-sigma modulator uses a proposed hybrid switching integrator and operates at a low supply voltage without clock boosting or bootstrapped switches. The proposed integrator consists of both switched-resistor and switched-capacitor operations and significantly reduces distortion at a low supply volta...
A 15.5 dB, wide signal swing, dynamic amplifier using a common-mode voltage detection technique This paper presents a high-speed, low-power and wide signal swing differential dynamic amplifier using a common-mode voltage detection technique. The proposed dynamic amplifier achieves a 15.5 dB gain with less than 1 dB drop over a signal swing of 1.3 Vpp at an operating frequency of 1.5 GHz with a VDD of 1.2 V in 90 nm CMOS. The power consumption of the proposed circuit can be reduced linearly with operating frequency lowering.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
Chains of recurrences—a method to expedite the evaluation of closed-form functions Chains of Recurrences (CR's) are introduced as an effective method to evaluate functions at regular intervals. Algebraic properties of CR's are examined and an algorithm that constructs a CR for a given function is explained. Finally, an implementation of the method in MAXIMA/Common Lisp is discussed.
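To make the CR idea concrete: a polynomial evaluated on a regular grid x0, x0+h, x0+2h, ... reduces to a few additions per point once its forward differences are tabulated. Below is a minimal Python sketch of a pure-sum CR (my own simplification; the paper's implementation is in MAXIMA/Common Lisp and handles products as well).

```python
def cr_coefficients(f, x0, h, order):
    """Build a pure-sum CR {c0, +, c1, +, ..., +, c_order} for f on the
    grid x0 + i*h by tabulating forward differences of f."""
    vals = [f(x0 + i * h) for i in range(order + 1)]
    coeffs = []
    for _ in range(order + 1):
        coeffs.append(vals[0])
        vals = [b - a for a, b in zip(vals, vals[1:])]
    return coeffs

def cr_evaluate(coeffs, n):
    """Yield f(x0), f(x0+h), ..., using only additions per grid point."""
    c = list(coeffs)
    for _ in range(n):
        yield c[0]
        for j in range(len(c) - 1):
            c[j] += c[j + 1]   # ascending order uses the old higher diffs

f = lambda x: 2 * x**2 - 3 * x + 1
print(list(cr_evaluate(cr_coefficients(f, x0=0.0, h=0.5, order=2), 5)))
print([f(0.5 * i) for i in range(5)])   # same values, computed directly
```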
The PARSEC benchmark suite: characterization and architectural implications This paper presents and characterizes the Princeton Application Repository for Shared-Memory Computers (PARSEC), a benchmark suite for studies of Chip-Multiprocessors (CMPs). Previous available benchmarks for multiprocessors have focused on high-performance computing applications and used a limited number of synchronization methods. PARSEC includes emerging applications in recognition, mining and synthesis (RMS) as well as systems applications which mimic large-scale multithreaded commercial programs. Our characterization shows that the benchmark suite covers a wide spectrum of working sets, locality, data sharing, synchronization and off-chip traffic. The benchmark suite has been made available to the public.
Gossip-based aggregation in large dynamic networks As computer networks increase in size, become more heterogeneous and span greater geographic distances, applications must be designed to cope with the very large scale, poor reliability, and often, with the extreme dynamism of the underlying network. Aggregation is a key functional building block for such applications: it refers to a set of functions that provide components of a distributed system access to global information including network size, average load, average uptime, location and description of hotspots, and so on. Local access to global information is often very useful, if not indispensable for building applications that are robust and adaptive. For example, in an industrial control application, some aggregate value reaching a threshold may trigger the execution of certain actions; a distributed storage system will want to know the total available free space; load-balancing protocols may benefit from knowing the target average load so as to minimize the load they transfer. We propose a gossip-based protocol for computing aggregate values over network components in a fully decentralized fashion. The class of aggregate functions we can compute is very broad and includes many useful special cases such as counting, averages, sums, products, and extremal values. The protocol is suitable for extremely large and highly dynamic systems due to its proactive structure---all nodes receive the aggregate value continuously, thus being able to track any changes in the system. The protocol is also extremely lightweight, making it suitable for many distributed applications including peer-to-peer and grid computing systems. We demonstrate the efficiency and robustness of our gossip-based protocol both theoretically and experimentally under a variety of scenarios including node and communication failures.
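A simplified synchronous push-pull averaging round in the spirit of the protocol described above (the real protocol is asynchronous, decentralized, and handles churn; peer selection and round counts here are mine):

```python
import random
import statistics

def gossip_average(values, rounds=30, seed=1):
    """Each node pairs with a random peer and both adopt the pair's mean.
    The global sum is invariant, and the variance across nodes decays
    rapidly, so every node converges to the network-wide average."""
    rng = random.Random(seed)
    v = list(values)
    n = len(v)
    for _ in range(rounds):
        for i in range(n):
            j = rng.randrange(n)           # random peer (simplification)
            v[i] = v[j] = (v[i] + v[j]) / 2.0
    return v

rng = random.Random(7)
loads = [rng.uniform(0, 100) for _ in range(50)]
est = gossip_average(loads)
print(statistics.mean(loads), min(est), max(est))  # all estimates near the mean
```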
Linear Amplification with Nonlinear Components A technique for producing bandpass linear amplification with nonlinear components (LINC) is described. The bandpass signal first is separated into two constant envelope component signals. All of the amplitude and phase information of the original bandpass signal is contained in phase modulation on the component signals. These constant envelope signals can be amplified or translated in frequency by amplifiers or mixers which have nonlinear input-output amplitude transfer characteristics. Passive linear combining of the amplified and/or translated component signals produces an amplified and/or translated replica of the original signal.
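The outphasing decomposition at the heart of LINC is compact: s(t) = a(t)e^{jφ(t)} splits into two constant-envelope components s1,2 = (A/2)e^{j(φ ± arccos(a/A))}, whose sum restores s exactly since e^{jθ} + e^{-jθ} = 2cos θ. A quick numeric check of this standard LINC identity (signal shapes are my own choice, not from the paper):

```python
import numpy as np

t = np.linspace(0, 1, 1000, endpoint=False)
a = 0.5 * (1 + 0.8 * np.sin(2 * np.pi * 3 * t))   # amplitude modulation
phi = 2 * np.pi * t                                # phase modulation
s = a * np.exp(1j * phi)                           # original complex envelope

A = np.max(a)                                      # constant component envelope
theta = np.arccos(a / A)                           # outphasing angle
s1 = (A / 2) * np.exp(1j * (phi + theta))          # constant-envelope component 1
s2 = (A / 2) * np.exp(1j * (phi - theta))          # constant-envelope component 2

# Both components have constant magnitude A/2; their passive sum is s.
print(np.allclose(np.abs(s1), A / 2), np.allclose(np.abs(s2), A / 2))
print(np.allclose(s1 + s2, s))                     # True: linear recombination
```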
Opportunistic Information Dissemination in Mobile Ad-hoc Networks: The Profit of Global Synchrony The topic of this paper is the study of Information Dissemination in Mobile Ad-hoc Networks by means of deterministic protocols. We characterize the connectivity resulting from the movement, from failures, and from the fact that nodes may join the computation at different times with two values, α and β, so that, within every period of α time slots, some node that has the information must be connected to some node without it for at least β time slots. The protocols studied are classified into three classes: oblivious (the transmission schedule of a node is only a function of its ID), quasi-oblivious (the transmission schedule may also depend on a global time), and adaptive. The main contribution of this work concerns negative results. Contrasting the lower and upper bounds derived, interesting complexity gaps among protocol classes are observed. More precisely, in order to guarantee any progress towards solving the problem, it is shown that β must be at least n − 1 in general, but that β ∈ Ω(n²/log n) if an oblivious protocol is used. Since quasi-oblivious protocols can guarantee progress with β ∈ O(n), this represents a significant gap, almost linear in n, between oblivious and quasi-oblivious protocols. Regarding the time to complete the dissemination, a lower bound of Ω(nα + n³/log n) is proved for oblivious protocols, which is tight up to a polylogarithmic factor because a constructive O(nα + n³ log n) upper bound exists for the same class. It is also proved that adaptive protocols require Ω(nα + n²), which is optimal given that a matching upper bound can be proved for quasi-oblivious protocols. These results show that the gap in time complexity between oblivious and quasi-oblivious, and hence adaptive, protocols is almost linear. This gap is what we call the profit of global synchrony, since it represents the gain the network obtains from global synchrony with respect to not having it.
20.3 A feedforward controlled on-chip switched-capacitor voltage regulator delivering 10W in 32nm SOI CMOS On-chip (or fully integrated) switched-capacitor (SC) voltage regulators (SCVR) have recently received a lot of attention due to their ease of monolithic integration. The use of deep trench capacitors can lead to SCVR implementations that simultaneously achieve high efficiency, high power density, and fast response time. For the application of granular power distribution of many-core microprocessor systems, the on-chip SCVR must maintain an output voltage above a certain minimum level Vout,min in order for the microprocessor core to meet setup time requirements. Following a transient load change, the output voltage typically exhibits a droop due to parasitic inductances and resistances in the power distribution network. Therefore, the steady-state output voltage is kept high enough to ensure VOUT > Vout,min at all times, thereby introducing an output voltage overhead that leads to increased system power consumption. The output voltage droop can be reduced by implementing fast regulation and a sufficient amount of on-chip decoupling capacitance. However, a large amount of on-chip decoupling capacitance is needed to significantly reduce the droop, and it becomes impractical to implement owing to the large chip area overhead required. This paper presents a feedforward control scheme that significantly reduces the output voltage droop in the presence of a large input voltage droop following a transient event. This in turn reduces the required output voltage overhead and may lead to significant overall system power savings.
A VCO-Based Nonuniform Sampling ADC Using a Slope-Dependent Pulse Generator This paper presents a voltage-controlled oscillator (VCO)-based nonuniform sampling analog-to-digital converter (ADC) as an alternative to the level-crossing (LC)-based converters for digitizing biopotential signals. This work aims to provide a good signal-to-noise-and-distortion ratio at a low average sampling rate. In the proposed conversion method, a slope-dependent pulse generation block is used to provide a variable sample rate adjusted according to the input signal's slope. Simulation results show that the introduced method meets a target reconstruction quality with a sampling rate approaching 92 Sps, while on the same MIT-BIH Arrhythmia No. 106 ECG benchmark, the classic LC-based approach requires a sampling rate higher than 500 Sps. The benefits of the proposed method are more remarkable when the input signal is very noisy. The proposed ADC achieves a compression ratio close to 4, but with only 5.4% root-mean-square difference when tested using the MIT-BIH Arrhythmia Database.
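The conversion rule is easy to emulate offline: estimate the local slope and shrink the sampling interval where the signal moves fast. The following is a behavioral sketch of that idea only, not the circuit; the step law, gain, and thresholds are arbitrary choices of mine.

```python
import numpy as np

def slope_dependent_sample(sig, t, d_min=1, d_max=32, gain=1.0):
    """Pick sample indices with spacing inversely related to |slope|:
    fast-moving segments are sampled densely, flat ones sparsely."""
    slope = np.abs(np.gradient(sig, t))
    idx, i = [0], 0
    while i < len(sig) - 1:
        step = int(np.clip(d_max / (1 + gain * slope[i]), d_min, d_max))
        i = min(i + step, len(sig) - 1)
        idx.append(i)
    return np.array(idx)

t = np.linspace(0, 1, 4000)
sig = np.where(t < 0.5, 0.1 * np.sin(2 * np.pi * t),   # slow, flat segment
               np.sin(2 * np.pi * 40 * t))             # fast segment
idx = slope_dependent_sample(sig, t)
print(len(idx), "of", len(sig), "points kept")  # sparse where the signal is flat
```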
1.2
0.2
0.2
0.2
0.1
0.05
0
0
0
0
0
0
0
0
A continuous-time ripple reduction technique for spinning-current Hall sensors The intrinsic offset of Hall sensors can be reduced with the help of the spinning-current technique, which modulates this offset away from the signal band. The resulting offset ripple can then be removed by a low-pass filter, which, however, limits the sensor's bandwidth. This paper presents a ripple-reduction technique that does not require a low-pass filter. Measurements on a Hall sensor system implemented in a 0.18μm CMOS process show that the technique can reduce the residual ripple by at least 40dB - to the same level as the sensor's noise.
Low Power CMOS-Based Hall Sensor with Simple Structure Using Double-Sampling Delta-Sigma ADC. A CMOS (complementary metal-oxide-semiconductor) Hall sensor with low power consumption and a simple structure is introduced. The tiny magnetic signal from the Hall device can be detected by a high-resolution delta-sigma ADC in the presence of offset and flicker noise. The offset as well as the flicker noise are effectively suppressed by the current-spinning technique combined with the double-sampling switches of the ADC. The double-sampling scheme of the ADC reduces the operating frequency and helps to reduce the power consumption. The prototype Hall sensor is fabricated in a 0.18-μm CMOS process, and the measurement shows a detection range of ±150 mT and a sensitivity of 110 μV/mT. The size of the active area is 0.7 mm², and the total power consumption is 4.9 mW. The proposed system is advantageous not only for low power consumption, but also for small sensor size due to its simplicity.
An integrated fluxgate magnetometer for use in closed-loop/open-loop isolated current sensing. This paper presents two integrated magnetic sensor ICs for isolated current sensing. Both employ an integrated fluxgate magnetometer with a sensitivity of 250 V/T and a 500 ksps readout circuit. Only 5.4 mW is required to excite the sensor, which is 20x more power efficient than the state-of-the-art. With an external magnetic core, the resulting closed-loop current sensor IC achieves a dynamic range of 112 dB and a non-linearity below 0.03%, while the open-loop current sensor IC has a dynamic range of 100 dB and a non-linearity below 0.2%.
A Fast T&H Overcurrent Detector for a Spinning Hall Current Sensor With Ping-Pong and Chopping Techniques This paper presents a fast spinning-current Hall sensor with 568 ns overall delay for sub-microsecond overcurrent detection (OCD) in a magnetic current sensor. By combining continuous-time chopping techniques and discrete-time dynamic offset cancellation techniques, the spinning frequency of 250 kHz does not limit the sensor speed. The proposed track-and-hold (T&H) ping-pong comparators extend the usage of auto-zeroing techniques for sensor interface applications. The design achieves a magnetic residual offset of 85 μT (mean) and 79 μT (1σ), while the offset drifts only 0.68 μT/°C (mean) and 0.27 μT/°C (1σ) from −40 °C to 150 °C. In addition, a background switched-capacitor filter breaks the limitation of high-frequency errors on conventional correlated double sampling techniques. The design thus reduces the input-referred noise to 136 μTrms with a bandwidth of 1.7 MHz, while consuming at least 30% less power than the other state-of-the-art designs. Moreover, the analog stress compensation with temperature coefficient (TC) correction guarantees an overall threshold error within ±4% over package stress and temperature.
A Monolithic CMOS Magnetic Hall Sensor with High Sensitivity and Linearity Characteristics. This paper presents a fully integrated linear Hall sensor by means of 0.8 μm high-voltage complementary metal-oxide semiconductor (CMOS) technology. This monolithic Hall sensor chip features a highly sensitive horizontal switched Hall plate and an efficient signal conditioner using a dynamic offset cancellation technique. An improved cross-like Hall plate achieves high magnetic sensitivity and low offset. A new spinning-current modulator stabilizes the quiescent output voltage and improves the reliability of the signal conditioner. The tested results show that at a 5 V supply voltage, the maximum Hall output voltage of the monolithic Hall sensor microsystem is up to ±2.1 V and the linearity of the Hall output voltage is higher than 99% in the magnetic flux density range from ±5 mT to ±175 mT. The output equivalent residual offset is 0.48 mT and the static power consumption is 20 mW.
A weak magnetic field measurement system using micro-fluxgate sensors and delta-sigma interface. This paper presents a weak magnetic field measurement system using micro-fluxgate (FG) sensors and a sensor signal processing technique using the delta-sigma modulation in the negative feedback loop. The feedback of the lowpass filtered bitstream output of a delta-sigma modulator to the magnetic field improves system linearity, hysteresis, and stability. In spite of the fact that the second-order ...
Max-Min D-Cluster Formation in Wireless Ad Hoc Networks An ad hoc network may be logically represented as a set of clusters. The clusterheads form a d-hop dominating set. Each node is at most d hops from a clusterhead. Clusterheads form a virtual backbone and may be used to route packets for nodes in their cluster. Previous heuristics restricted themselves to 1-hop clusters. We show that the minimum d-hop dominating set problem is NP-complete. Then we present a heuristic to form d-clusters in a wireless ad hoc network. Nodes are assumed to have a non-deterministic mobility pattern. Clusters are formed by diffusing node identities along the wireless links. When the heuristic terminates, a node either becomes a clusterhead, or is at most d wireless hops away from its clusterhead. The value of d is a parameter of the heuristic. The heuristic can be run either at regular intervals, or whenever the network configuration changes. One of the features of the heuristic is that it tends to re-elect existing clusterheads even when the network configuration changes. This helps to reduce the communication overheads during the transition from old clusterheads to new clusterheads. Also, there is a tendency to evenly distribute the mobile nodes among the clusterheads, and evenly distribute the responsibility of acting as clusterheads among all nodes. Thus, the heuristic is fair and stable. Simulation experiments demonstrate that the proposed heuristic is better than the two earlier heuristics, namely the LCA [1] and Degree-based [11] solutions.
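A much-simplified sketch of the identity-diffusion idea follows: this is only the "floodmax" half, where each node learns the largest ID within d hops; the full Max-Min heuristic adds a floodmin phase and tie-breaking rules (not reproduced here) to favor re-election and load balance.

```python
def floodmax(adj, d):
    """Each node repeatedly adopts the largest node ID heard from its
    neighbors; after d rounds it knows the max ID within d hops and
    treats that node as its (tentative) clusterhead.

    adj: dict mapping node ID -> iterable of neighbor IDs.
    """
    winner = {v: v for v in adj}
    for _ in range(d):
        winner = {v: max([winner[v]] + [winner[u] for u in adj[v]])
                  for v in adj}
    return winner

# a 7-node path graph 0-1-2-3-4-5-6, with d = 2
adj = {i: [j for j in (i - 1, i + 1) if 0 <= j <= 6] for i in range(7)}
print(floodmax(adj, d=2))   # nodes within 2 hops of node 6 elect 6, etc.
```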
Measurement issues in galvanic intrabody communication: influence of experimental setup Significance: The need for increasingly energy-efficient and miniaturized bio-devices for ubiquitous health monitoring has paved the way for considerable advances in the investigation of techniques such as intrabody communication (IBC), which uses human tissues as a transmission medium. However, IBC still poses technical challenges regarding the measurement of the actual gain through the human body. The heterogeneity of experimental setups and conditions used together with the inherent uncertainty caused by the human body make the measurement process even more difficult. Goal: The objective of this work, focused on galvanic coupling IBC, is to study the influence of different measurement equipments and conditions on the IBC channel. Methods: different experimental setups have been proposed in order to analyze key issues such as grounding, load resistance, type of measurement device and effect of cables. In order to avoid the uncertainty caused by the human body, an IBC electric circuit phantom mimicking both human bioimpedance and gain has been designed. Given the low-frequency operation of galvanic coupling, a frequency range between 10 kHz and 1 MHz has been selected. Results: the correspondence between simulated and experimental results obtained with the electric phantom have allowed us to discriminate the effects caused by the measurement equipment. Conclusion: this study has helped us obtain useful considerations about optimal setups for galvanic-type IBC as well as to identify some of the main causes of discrepancy in the literature.
On the minimal synchronism needed for distributed consensus Reaching agreement is a primitive of distributed computing. While this poses no problem in an ideal, failure-free environment, it imposes certain constraints on the capabilities of an actual system: a system is viable only if it permits the existence of consensus protocols tolerant to some number of failures. Fischer, Lynch and Paterson [FLP] have shown that in a completely asynchronous model, even one failure cannot be tolerated. In this paper we extend their work, identifying several critical system parameters, including various synchronicity conditions, and examine how varying these affects the number of faults which can be tolerated. Our proofs expose general heuristic principles that explain why consensus is possible in certain models but not possible in others.
Type-2 Fuzzy Sets and Systems: An Overview [corrected reprint] As originally published in the February 2007 issue of IEEE Computational Intelligence Magazine, the above titled paper (ibid., vol. 2, no. 1, pp. 20-29, Feb 07) contained errors in mathematics that were introduced by the publisher. The corrected version is reprinted in its entirety.
Wideband Balun-LNA With Simultaneous Output Balancing, Noise-Canceling and Distortion-Canceling An inductorless low-noise amplifier (LNA) with active balun is proposed for multi-standard radio applications between 100 MHz and 6 GHz. It exploits a combination of a common-gate (CG) stage and an admittance-scaled common-source (CS) stage with replica biasing to maximize balanced operation, while simultaneously canceling the noise and distortion of the CG-stage. In this way, a noise figure (NF) close to or below 3 dB can be achieved, while good linearity is possible when the CS-stage is carefully optimized. We show that a CS-stage with deep submicron transistors can have high IIP2, because the vGS·vDS cross-term in a two-dimensional Taylor approximation of the IDS(VGS, VDS) characteristic can cancel the traditionally dominant square-law term in the IDS(VGS) relation at practical gain values. Using standard 65 nm transistors at 1.2 V supply voltage, we realize a balun-LNA with 15 dB gain, NF < 3.5 dB and IIP2 > +20 dBm, while simultaneously achieving an IIP3 > 0 dBm. The best performance of the balun is achieved between 300 MHz and 3.5 GHz with gain and phase errors below 0.3 dB and ±2 degrees. The total power consumption is 21 mW, while the active area is only 0.01 mm².
Sensor network gossiping or how to break the broadcast lower bound Gossiping is an important problem in Radio Networks that has been well studied, leading to many important results. Due to strong resouce limitations of sensor nodes, previous solutions are frequently not feasible in Sensor Networks. In this paper, we study the gossiping problem in the restrictive context of Sensor Networks. By exploiting the geometry of sensor node distributions, we present reduced, optimal running time of O(D + Δ) for an algorithm that completes gossiping with high probability in a Sensor Network of unknown topology and adversarial wake-up, where D is the diameter and Δ the maximum degree of the network. Given that an algorithm for gossiping also solves the broadcast problem, our result proves that the classic lower bound of [16] can be broken if nodes are allowed to do preprocessing.
Optimum insertion/deletion point selection for fractional sample rate conversion In this paper, an optimum insertion/deletion point selection algorithm for fractional sample rate conversion (SRC) is proposed. The direct insertion/deletion technique achieves low complexity and low power consumption as compared to the other fractional SRC methods. Using a multiple-set insertion/deletion technique is efficient for reducing the distortion caused by the insertion/deletion step. When the conversion factor is (N ± δ)/N, the number of possible patterns of insertion/deletion points and the number of combinations for multiple-set inserters/deleters grow as δ increases. The proposed algorithm minimizes the distortion due to SRC by selecting the patterns and the combinations.
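The direct-insertion mechanism itself is simple to sketch: to turn N input samples into N + δ output samples, repeat samples at δ chosen points (deletion is the mirror image). The paper's contribution is choosing those points to minimize distortion; the naive even spacing below does not attempt that and is illustrative only.

```python
def direct_insertion(block, delta):
    """Convert len(block) samples to len(block) + delta samples by
    repeating samples at (here: evenly spaced) insertion points.
    Optimal point selection, as in the paper, is not attempted."""
    n = len(block)
    points = {round((i + 1) * n / (delta + 1)) for i in range(delta)}
    out = []
    for i, x in enumerate(block):
        out.append(x)
        if i in points:
            out.append(x)        # direct insertion: repeat the sample
    return out

out = direct_insertion(list(range(12)), delta=3)
print(out, len(out))             # 15 output samples for 12 input samples
```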
Multi-Channel Neural Recording Implants: A Review. The recently growing progress in neuroscience research and relevant achievements, as well as advancements in the fabrication process, have increased the demand for neural interfacing systems. Brain-machine interfaces (BMIs) have been revealed to be a promising method for the diagnosis and treatment of neurological disorders and the restoration of sensory and motor function. Neural recording implants, as a part of BMI, are capable of capturing brain signals, and amplifying, digitizing, and transferring them outside of the body with a transmitter. The main challenges of designing such implants are minimizing power consumption and the silicon area. In this paper, multi-channel neural recording implants are surveyed. After presenting various neural-signal features, we investigate main available neural recording circuit and system architectures. The fundamental blocks of available architectures, such as neural amplifiers, analog to digital converters (ADCs) and compression blocks, are explored. We cover the various topologies of neural amplifiers, provide a comparison, and probe their design challenges. To achieve a relatively high SNR at the output of the neural amplifier, noise reduction techniques are discussed. Also, to transfer neural signals outside of the body, they are digitized using data converters, then in most cases, the data compression is applied to mitigate power consumption. We present the various dedicated ADC structures, as well as an overview of main data compression methods.
1.072306
0.066667
0.041333
0.033333
0.011667
0.000889
0
0
0
0
0
0
0
0
Time-free and timer-based assumptions can be combined to obtain eventual leadership Leader-based protocols rest on a primitive able to provide the processes with the same unique leader. Such protocols are very common in distributed computing to solve synchronization or coordination problems. Unfortunately, providing such a primitive is far from being trivial in asynchronous distributed systems prone to process crashes. (It is even impossible in fault-prone purely asynchronous systems.) To circumvent this difficulty, several protocols have been proposed that build a leader facility on top of an asynchronous distributed system enriched with additional assumptions. The protocols proposed so far consider either additional assumptions based on synchrony or additional assumptions on the pattern of the messages that are exchanged. Considering systems with n processes and up to f process crashes, 1 ≤ f < n, this paper investigates the combination of a time-free assumption on the message pattern with a synchrony assumption on process speed and message delay. It shows that both types of assumptions can be combined to obtain a hybrid eventual leader protocol benefiting from the best of both worlds. This combined assumption considers a star communication structure involving f+1 processes. Its noteworthy feature lies in the level of combination of both types of assumption that is "as fine as possible" in the sense that each of the f channels of the star has to satisfy a property independently of the property satisfied by each of the f−1 other channels (the f channels do not have to satisfy the same assumption). More precisely, this combined assumption is the following: There is a correct process p (center of the star) and a set Q of f processes q (p ∉ Q) such that, eventually, either 1) each time it broadcasts a query, q receives a response from p among the (n−f) first responses to that query, or 2) the channel from p to q is timely. (The processes in the set Q can crash.) A surprisingly simple eventual leader protocol based on this fine-grain hybrid assumption is proposed and proved correct. An improvement is also presented.
Eventual Leader Election with Weak Assumptions on Initial Knowledge, Communication Reliability, and Synchrony This paper considers the eventual leader election problem in asynchronous message-passing systems where an arbitrary number t of processes can crash (t < n, where n is the total number of processes). It considers weak assumptions both on the initial knowledge of the processes and on the network behavior. More precisely, initially, a process knows only its identity and the fact that the process identities are different and totally ordered (it knows neither n nor t). Two eventual leader election protocols and a lower bound are presented. The first protocol assumes that a process also knows a lower bound α on the number of processes that do not crash. This protocol requires the following behavioral properties from the underlying network: the graph made up of the correct processes and fair lossy links is strongly connected, and there is a correct process connected to (n − f) − α other correct processes (where f is the actual number of crashes in the considered run) through eventually timely paths (paths made up of correct processes and eventually timely links). This protocol is not communication-efficient in the sense that each correct process has to send messages forever. The second protocol is communication-efficient: after some time, only the final common leader has to send messages forever. This protocol does not require the processes to know α, but requires stronger properties from the underlying network: each pair of correct processes has to be connected by fair lossy links (one in each direction), and there is a correct process whose n − f − 1 output links to the rest of the correct processes have to be eventually timely. A matching lower bound result shows that any eventual leader election protocol must have runs with this number of eventually timely links, even if all processes know all the process identities. In addition to being communication-efficient, the second protocol has another noteworthy efficiency property: be the run finite or infinite, all the local variables and message fields have a finite domain in the run.
Principles of Distributed Systems, 13th International Conference, OPODIS 2009, Nîmes, France, December 15-18, 2009. Proceedings
Reliable Broadcast in Radio Networks with Locally Bounded Failures This paper studies the reliable broadcast problem in a radio network with locally bounded failures. We present a sufficient condition for achievability of reliable broadcast in a general graph subject to Byzantine/crash-stop failures. We then consider the problem of reliable broadcast in an infinite grid (or finite toroidal) radio network under Byzantine and crash-stop failures. We present bounds on the maximum number of failures that may occur in any given neighborhood without rendering reliable broadcast impossible. For the Byzantine failure model, we describe an algorithm which is optimal for the grid network model, as it tolerates faults up to a previously established upper bound for this model. Our results indicate that it is possible to achieve reliable broadcast if slightly less than one-fourth fraction of nodes in any neighborhood are faulty. We also show that reliable broadcast is achievable with crash-stop failures if slightly less than half the nodes in any given neighborhood may be faulty.
Implementing the Omega failure detector in the crash-recovery failure model Unreliable failure detectors are mechanisms providing information about process failures, that allow solving several problems in asynchronous systems, e.g., Consensus. A particular class of failure detectors, Omega, provides an eventual leader election functionality. This paper addresses the implementation of Omega in the crash-recovery failure model. Recently we have proposed an algorithm assuming that eventually the correct process with the smallest identifier and minimum incarnation number can communicate timely with the rest of processes. Here we propose two Omega algorithms which assume only that processes are reachable from some correct process, independently of its identifier and incarnation number. The first one requires the membership to be known a priori, while the second one relaxes this assumption too.
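The generic timeout-and-heartbeat skeleton underlying many Omega implementations can be sketched as follows. This is a deliberate simplification (synchronous rounds, crash-stop failures, known membership, names mine); it is not the paper's crash-recovery algorithm, which additionally handles incarnation numbers and message loss.

```python
def omega_round(ids, suspected, heard_from, silent_rounds, timeout=3):
    """One round of a heartbeat-based Omega sketch.

    ids: all process IDs; heard_from: IDs whose heartbeat arrived this
    round. A process is suspected after `timeout` silent rounds. The
    leader is the smallest unsuspected ID, so all correct processes
    eventually agree on the same leader once suspicions stabilize."""
    for p in ids:
        if p in heard_from:
            silent_rounds[p] = 0
            suspected.discard(p)
        else:
            silent_rounds[p] = silent_rounds.get(p, 0) + 1
            if silent_rounds[p] >= timeout:
                suspected.add(p)
    return min(p for p in ids if p not in suspected)

ids = {1, 2, 3}
suspected, silent = set(), {}
for rnd in range(5):
    heard = {2, 3}   # process 1 has crashed and stays silent
    print(rnd, omega_round(ids, suspected, heard, silent))
# leader stays 1 for two rounds, then settles on 2 once 1 is suspected
```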
Eventual Leader Election in Infinite Arrival Message-Passing System Model with Bounded Concurrency We study the failure detection problem in a message-passing system that may dynamically change over time, so that the number of processes which make progress during a computation may grow to infinity as time tends to infinity but the number of concurrently up processes do not exceed a known bound. We first propose the specification of a new oracle, called HB*, able to give hints on which processes are making progress in the system. A possible HB* implementation is given. Then, we show how to use HB* to implement the oracle Ω that eventually identifies a unique leader in the system. To the best of our knowledge this is the first implementation of Ω running in a message passing system with infinitely many processes.
Reaching Agreement in the Presence of Faults The problem addressed here concerns a set of isolated processors, some unknown subset of which may be faulty, that communicate only by means of two-party messages. Each nonfaulty processor has a private value of information that must be communicated to each other nonfaulty processor. Nonfaulty processors always communicate honestly, whereas faulty processors may lie. The problem is to devise an algorithm in which processors communicate their own values and relay values received from others that allows each nonfaulty processor to infer a value for each other processor. The value inferred for a nonfaulty processor must be that processor's private value, and the value inferred for a faulty one must be consistent with the corresponding value inferred by each other nonfaulty processor.It is shown that the problem is solvable for, and only for, n ≥ 3m + 1, where m is the number of faulty processors and n is the total number. It is also shown that if faulty processors can refuse to pass on information but cannot falsely relay information, the problem is solvable for arbitrary n ≥ m ≥ 0. This weaker assumption can be approximated in practice using cryptographic methods.
Information spreading in stationary Markovian evolving graphs Markovian evolving graphs [2] are dynamic-graph models where the links among a fixed set of nodes change during time according to an arbitrary Markovian rule. They are extremely general and they can well describe important dynamic-network scenarios.
How to share a secret In this paper we show how to divide data D into n pieces in such a way that D is easily reconstructable from any k pieces, but even complete knowledge of k - 1 pieces reveals absolutely no information about D. This technique enables the construction of robust key management schemes for cryptographic systems that can function securely and reliably even when misfortunes destroy half the pieces and security breaches expose all but one of the remaining pieces.
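Shamir's construction is short enough to show in full: pick a random degree-(k−1) polynomial over a prime field with the secret as constant term, hand out points as shares, and rebuild the secret by Lagrange interpolation at 0. The sketch below is the scheme itself; the field modulus and share x-coordinates are arbitrary choices of mine.

```python
import random

P = 2_147_483_647   # a Mersenne prime; the field modulus (assumed choice)

def make_shares(secret, n, k, rng=random.Random(42)):
    """Split `secret` into n shares; any k of them reconstruct it,
    and k-1 shares reveal nothing about it."""
    coeffs = [secret] + [rng.randrange(P) for _ in range(k - 1)]
    def poly(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(P)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P  # Fermat inverse
    return secret

shares = make_shares(123456789, n=5, k=3)
print(reconstruct(shares[:3]), reconstruct(shares[1:4]))  # both: 123456789
```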
COCA: A secure distributed online certification authority COCA is a fault-tolerant and secure on-line certification authority that has been built and deployed both in a local area network and in the Internet. Replication is used to achieve availability; proactive recovery with threshold cryptography is used for digitally signing certificates in a way that defends against mobile adversaries which attack, compromise, and control one replica for a limited period of time before moving on to another. Relatively weak assumptions characterize environments in which COCA's protocols will execute correctly. No assumption is made about execution speed and message delivery delays; channels are expected to exhibit only intermittent reliability; and with 3t+1 COCA servers up to t may be faulty or compromised. The result is a system with inherent defenses to certain denial of service attacks because, by their very nature, weak assumptions are difficult for attackers to invalidate. In addition, traditional techniques, including request authorization, resource management based on segregation and scheduling different classes of requests, as well as caching results of expensive cryptographic operations further reduce COCA's vulnerability to denial of service attacks. Results from experiments in a local area network and the Internet allow a quantitative evaluation of the various means COCA employs to resist denial of service attacks.
Exploiting availability prediction in distributed systems Loosely-coupled distributed systems have significant scale and cost advantages over more traditional architectures, but the availability of the nodes in these systems varies widely. Availability modeling is crucial for predicting per-machine resource burdens and understanding emergent, system-wide phenomena. We present new techniques for predicting availability and test them using traces taken from three distributed systems. We then describe three applications of availability prediction. The first, availability-guided replica placement, reduces object copying in a distributed data store while increasing data availability. The second shows how availability prediction can improve routing in delay-tolerant networks. The third combines availability prediction with virus modeling to improve forecasts of global infection dynamics.
A 41-phase switched-capacitor power converter with 3.8mV output ripple and 81% efficiency in baseline 90nm CMOS.
Fundamental analysis of a car to car visible light communication system This paper presents a mathematical model for car-to-car (C2C) visible light communications (VLC) that aims to predict the system performance under different communication geometries. A market-weighted headlamp beam pattern model is employed. We consider both the line-of-sight (LOS) and non-line-of-sight (NLOS) links, and outline the relationship between the communication distance, the system bit error rate (BER) performance and the BER distribution on a vertical plane. Results show that by placing a photodetector (PD) at a height of 0.2-0.4 m above road surface in the car, the communications coverage range can be extended up to 20 m at a data rate of 2Mbps.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54 ×(CPU), 1.56 ×(GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02 ×(memristor-based PIM) and 10.48 ×(CMOS-based PIM), on average.
1.033458
0.0325
0.03003
0.03003
0.023915
0.017089
0.003517
0.000231
0
0
0
0
0
0
A Bayesian Method for the Induction of Probabilistic Networks from Data This paper presents a Bayesian method for constructing probabilistic networks from databases. In particular, we focus on constructing Bayesian belief networks. Potential applications include computer-assisted hypothesis testing, automated scientific discovery, and automated construction of probabilistic expert systems. We extend the basic method to handle missing data and hidden (latent) variables. We show how to perform probabilistic inference by averaging over the inferences of multiple belief networks. Results are presented of a preliminary evaluation of an algorithm for constructing a belief network from a database of cases. Finally, we relate the methods in this paper to previous work, and we discuss open problems.
Estimating continuous distributions in Bayesian classifiers When modeling a probability distribution with a Bayesian network, we are faced with the problem of how to handle continuous variables. Most previous work has either solved the problem by discretizing, or assumed that the data are generated by a single Gaussian. In this paper we abandon the normality assumption and instead use statistical methods for nonparametric density estimation. For a naive Bayesian classifier, we present experimental results on a variety of natural and artificial domains, comparing two methods of density estimation: assuming normality and modeling each conditional distribution with a single Gaussian; and using nonparametric kernel density estimation. We observe large reductions in error on several natural and artificial data sets, which suggests that kernel estimation is a useful tool for learning Bayesian models.
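The kernel-estimation idea above can be sketched compactly: per class and per feature, replace the single Gaussian with a density that averages one Gaussian kernel per training value. The bandwidth rule and class structure below are my assumptions for illustration, not necessarily the paper's exact choices.

```python
import math
from collections import defaultdict

def gaussian(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

class KernelNaiveBayes:
    """Naive Bayes where each class-conditional feature density is a
    kernel density estimate: the average of Gaussians centred on each
    training value (one kernel per observation)."""
    def fit(self, X, y):
        self.values = defaultdict(list)   # (class, feature) -> training values
        self.prior = defaultdict(float)
        for row, c in zip(X, y):
            self.prior[c] += 1 / len(y)
            for f, v in enumerate(row):
                self.values[(c, f)].append(v)
        return self

    def predict(self, row):
        def log_post(c):
            lp = math.log(self.prior[c])
            for f, v in enumerate(row):
                pts = self.values[(c, f)]
                h = 1 / math.sqrt(len(pts))   # bandwidth heuristic (assumption)
                lp += math.log(sum(gaussian(v, m, h) for m in pts) / len(pts))
            return lp
        return max(self.prior, key=log_post)

X = [[1.0, 5.1], [1.2, 4.9], [3.9, 0.9], [4.1, 1.2]]
y = ["a", "a", "b", "b"]
print(KernelNaiveBayes().fit(X, y).predict([1.1, 5.0]))   # -> "a"
```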
InterCloud: utility-oriented federation of cloud computing environments for scaling of application services Cloud computing providers have set up several data centers at different geographical locations over the Internet in order to optimally serve the needs of their customers around the world. However, existing systems do not support mechanisms and policies for dynamically coordinating load distribution among different Cloud-based data centers in order to determine the optimal location for hosting application services to achieve reasonable QoS levels. Further, Cloud computing providers are unable to predict the geographic distribution of users consuming their services, hence the load coordination must happen automatically, and the distribution of services must change in response to changes in the load. To counter this problem, we advocate the creation of a federated Cloud computing environment (InterCloud) that facilitates just-in-time, opportunistic, and scalable provisioning of application services, consistently achieving QoS targets under variable workload, resource, and network conditions. The overall goal is to create a computing environment that supports dynamic expansion or contraction of capabilities (VMs, services, storage, and database) for handling sudden variations in service demands. This paper presents the vision, challenges, and architectural elements of InterCloud for utility-oriented federation of Cloud computing environments. The proposed InterCloud environment supports scaling of applications across multiple vendor clouds. We have validated our approach by conducting a rigorous performance evaluation study using the CloudSim toolkit. The results demonstrate that the federated Cloud computing model has immense potential, as it offers significant performance gains with regard to response time and cost savings under dynamic workload scenarios.
Practical delegation of computation using multiple servers The current move to Cloud Computing raises the need for verifiable delegation of computations, where a weak client delegates his computation to a powerful server, while maintaining the ability to verify that the result is correct. Although there are prior solutions to this problem, none of them is yet both general and practical for real-world use. We demonstrate a relatively efficient and general solution where the client delegates the computation to several servers, and is guaranteed to determine the correct answer as long as even a single server is honest. We show: A protocol for any efficiently computable function, with logarithmically many rounds, based on any collision-resistant hash family. The protocol is set in terms of Turing Machines but can be adapted to other computation models. An adaptation of the protocol for the X86 computation model and a prototype implementation, called Quin, for Windows executables. We describe the architecture of Quin and experiment with several parameters on live clouds. We show that the protocol is practical, can work with nowadays clouds, and is efficient both for the servers and for the client.
Dependency Mining for Service Resilience at the Edge The edge computing paradigm is prone to failures, as it trades reliability against other quality-of-service properties such as low latency and geographical prevalence. Therefore, software services that run on edge infrastructure must rely on failure-resilience techniques for uninterrupted delivery. The unique combination of hardware, software, and network characteristics of edge services is not addressed by existing techniques that are designed or tailored for cloud services. In this work, we propose a novel method for evaluating the resilience of replicated edge services, which exploits failure dependencies between edge servers to forecast the probability of service interruption. This is done by analyzing historical failure logs of individual servers, modeling temporal dependencies as a dynamic Bayesian network, and inferring the probability that a certain number of servers fails concurrently. Furthermore, we propose two replica scheduling algorithms that optimize different criteria in resilient service deployment, namely failure probability and cost of redundancy.
EFPO - Energy Efficient and Failure Predictive Edge Offloading.
Adaptive clustering for mobile wireless networks This paper describes a self-organizing, multihop, mobile radio network which relies on a code-division access scheme for multimedia support. In the proposed network architecture, nodes are organized into nonoverlapping clusters. The clusters are independently controlled and are dynamically reconfigured as the nodes move. This network architecture has three main advantages. First, it provides spatial reuse of the bandwidth due to node clustering. Second, bandwidth can be shared or reserved in a controlled fashion in each cluster. Finally, the cluster algorithm is robust in the face of topological changes caused by node motion, node failure, and node insertion/removal. Simulation shows that this architecture provides an efficient, stable infrastructure for the integration of different types of traffic in a dynamic radio network.
Computing size-independent matrix problems on systolic array processors A methodology to transform dense matrices to band matrices is presented in this paper. This transformation is accomplished by partitioning into triangular blocks, and allows the implementation of solutions to problems of any given size by means of contraflow systolic arrays, originally proposed by H.T. Kung. Matrix-vector and matrix-matrix multiplications are the operations considered here. The proposed transformations allow the optimal utilization of the processing elements (PEs) of the systolic array when dense matrices are operated on. Every computation is made inside the array by using adequate feedback. The feedback delay time depends only on the systolic array size.
Multilevel k-way hypergraph partitioning In this paper, we present a new multilevel k-way hypergraph partitioning algorithm that substantially outperforms the existing state-of-the-art K-PM/LR algorithm for multi-way partitioning, both for optimizing local as well as global objectives. Experiments on the ISPD98 benchmark suite show that the partitionings produced by our scheme are on average 15% to 23% better than those produced by the K-PM/LR algorithm, both in terms of the hyperedge cut as well as the (K-1) metric. Furthermore, our algorithm is significantly faster, requiring 4 to 5 times less time than that required by K-PM/LR.
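The two objectives named here are easy to state concretely. The sketch below computes both the hyperedge cut and the (K-1) metric for a given vertex-to-part assignment; the tiny hypergraph and partition are made up for illustration.

```python
def hyperedge_cut(hyperedges, part):
    """Number of hyperedges that span more than one part."""
    return sum(1 for e in hyperedges if len({part[v] for v in e}) > 1)

def k_minus_1_metric(hyperedges, part):
    """Sum over hyperedges of (number of parts touched - 1)."""
    return sum(len({part[v] for v in e}) - 1 for e in hyperedges)

# 6 vertices, 4 hyperedges, a 3-way partition (all values illustrative).
edges = [(0, 1, 2), (2, 3), (3, 4, 5), (0, 5)]
part = {0: 0, 1: 0, 2: 0, 3: 1, 4: 2, 5: 2}
print(hyperedge_cut(edges, part), k_minus_1_metric(edges, part))  # 3 3
```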
Identifying and Filtering Near-Duplicate Documents The mathematical concept of document resemblance captures well the informal notion of syntactic similarity. The resemblance can be estimated using a fixed size "sketch" for each document. For a large collection of documents (say hundreds of millions) the size of this sketch is of the order of a few hundred bytes per document. However, for efficient large scale web indexing it is not necessary to determine the actual resemblance value: it suffices to determine whether newly encountered documents are duplicates or near-duplicates of documents already indexed. In other words, it suffices to determine whether the resemblance is above a certain threshold. In this talk we show how this determination can be made using a "sample" of less than 50 bytes per document. The basic approach for computing resemblance has two aspects: first, resemblance is expressed as a set (of strings) intersection problem, and second, the relative size of intersections is evaluated by a process of random sampling that can be done independently for each document. The process of estimating the relative size of intersection of sets and the threshold test discussed above can be applied to arbitrary sets, and thus might be of independent interest. The algorithm for filtering near-duplicate documents discussed here has been successfully implemented and has been used for the last three years in the context of the AltaVista search engine.
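A minimal sketch of the sampling idea: represent each document by its set of shingles, keep only the minimum of each of several random hash functions, and estimate resemblance as the fraction of agreeing minima. The hash-family construction and sketch size below are illustrative choices, not the exact AltaVista parameters.

```python
import random

def shingles(text: str, w: int = 4) -> set:
    """Contiguous character w-grams (word shingles work the same way)."""
    return {text[i:i + w] for i in range(len(text) - w + 1)}

def min_hash_sketch(s: set, seeds, modulus=2**61 - 1) -> list:
    """One minimum per random hash function: a fixed-size sketch of the set."""
    return [min((a * hash(x) + b) % modulus for x in s) for a, b in seeds]

def estimated_resemblance(sk1, sk2) -> float:
    """Fraction of agreeing sketch coordinates estimates |A∩B| / |A∪B|."""
    return sum(m1 == m2 for m1, m2 in zip(sk1, sk2)) / len(sk1)

random.seed(1)
seeds = [(random.randrange(1, 2**32), random.randrange(2**32)) for _ in range(128)]
a = shingles("the quick brown fox jumps over the lazy dog")
b = shingles("the quick brown fox leaps over the lazy dog")
print(estimated_resemblance(min_hash_sketch(a, seeds), min_hash_sketch(b, seeds)))
```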
On receding horizon feedback control Receding horizon feedback control (RHFC) was originally introduced as an easy method for designing stable state-feedback controllers for linear systems. Here those results are generalized to the control of nonlinear autonomous systems, and we develop a performance index which is minimized by the RHFC (inverse optimal control problem). Previous results for linear systems have shown that desirable nonlinear controllers can be developed by making the RHFC horizon distance a function of the state. That functional dependence was implicit and difficult to implement on-line. Here we develop similar controllers for which the horizon distance is an easily computed explicit function of the state.
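For the linear-quadratic special case, receding horizon control reduces to re-solving a finite-horizon problem at every step and applying only the first input. The sketch below does this with a backward Riccati recursion for a discrete-time double integrator; the horizon, weights, and dynamics are illustrative assumptions, not the paper's nonlinear setting.

```python
import numpy as np

def rhfc_step(A, B, Q, R, x, horizon=10):
    """One receding-horizon step: solve the finite-horizon LQ problem by a
    backward Riccati recursion and apply only the first control move."""
    P = Q.copy()
    for _ in range(horizon):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return -K @ x   # after the loop, K is the gain for the current step

# Double integrator, regulated to the origin (illustrative parameters).
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.5 * dt**2], [dt]])
Q, R = np.eye(2), np.array([[0.1]])

x = np.array([1.0, 0.0])
for _ in range(100):
    u = rhfc_step(A, B, Q, R, x)
    x = A @ x + B @ u
print(x)   # should be close to the origin
```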
Recurrent-Fuzzy-Neural-Network-Controlled Linear Induction Motor Servo Drive Using Genetic Algorithms A recurrent fuzzy neural network (RFNN) controller based on real-time genetic algorithms (GAs) is developed for a linear induction motor (LIM) servo drive in this paper. First, the dynamic model of an indirect field-oriented LIM servo drive is derived. Then, an online training RFNN with a backpropagation algorithm is introduced as the tracking controller. Moreover, to guarantee the global convergence of the tracking error, a real-time GA is developed to search the optimal learning rates of the RFNN online. The GA-based RFNN control system is proposed to control the mover of the LIM for periodic motion. The theoretical analyses for the proposed GA-based RFNN controller are described in detail. Finally, simulated and experimental results show that the proposed controller provides high-performance dynamic characteristics and is robust with regard to plant parameter variations and external load disturbance.
Variable Off-Time Control Loop for Current-Mode Floating Buck Converters in LED Driving Applications A versatile controller architecture, used in current-mode floating buck converters for LED driving, is developed. State-of-the-art controllers rely on a fixed switching period and variable duty cycle, focusing on current averaging circuits. Instead, the proposed controller architecture is based on fixed peak current and adaptable off time as the average current control method. The control loop is comprised of an averaging block, transconductance amplifier, and an innovative time modulator. This modulator is intended to provide constant control loop response regardless of input voltage, current storage inductor, and number of LEDs in order to improve converter applicability for LED drivers. Fabricated in a 5 V standard 0.5 μm CMOS technology, the prototype controller is implemented and tested in a current-mode floating buck converter. The converter exhibits sound continuous conduction mode (CCM) operation for input voltages between 11 and 20 V, and a wide inductor range of 100-1000 μH. In all instances, the measured average LED current variation was lower than 10% of the desired value. A maximum conversion efficiency of 91% is obtained when driving 50 mA through four LEDs (with 14 V input voltage and an inductor of 470 μH). A stable CCM converter operation is also proven by simulation for nine LEDs and 45 V input voltage.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
1.0926
0.067
0.06
0.06
0.06
0.03
0.00001
0
0
0
0
0
0
0
Stride 2 1-D, 2-D, and 3-D Winograd for Convolutional Neural Networks Convolutional neural networks (CNNs) have been widely adopted for computer vision applications. CNNs require many multiplications, making their use expensive in terms of both computational complexity and hardware. An effective method to mitigate the number of required multiplications is via the Winograd algorithm. Previous implementations of CNNs based on Winograd use the 2-D algorithm $F(2 \times 2, 3 \times 3)$, which reduces computational complexity by a factor of 2.25 over regular convolution. However, current Winograd implementations only apply when using a stride (shift displacement of a kernel over an input) of 1. In this article, we present a novel method to apply the Winograd algorithm to a stride of 2. This method is valid for one, two, or three dimensions. We also introduce new Winograd versions compatible with kernels of size 3, 5, and 7. The algorithms were successfully implemented on an NVIDIA K20c GPU. Compared to regular convolutions, the implementations for stride 2 are 1.44× faster for a 3×3 kernel, 2.04× faster for a 5×5 kernel, 2.42× faster for a 7×7 kernel, and 1.73× faster for a 3×3×3 kernel. Additionally, a CNN accelerator using a novel processing element (PE) that performs two 2-D Winograd stride-1 operations, or one 2-D Winograd stride-2 operation, per clock cycle was implemented on an Intel Arria-10 field-programmable gate array (FPGA). We accelerated the original and our proposed modified VGG-16 architectures and achieved digital signal processor (DSP) efficiencies of 1.22 giga operations per second (GOPS)/DSP and 1.33 GOPS/DSP, respectively.
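The flavor of the algorithm is already visible in the 1-D stride-1 case. The sketch below implements the standard F(2,3) transform, which produces two outputs of a 3-tap filter with 4 multiplications instead of 6, and checks it against direct computation; the stride-2 variants in the paper follow the same transform-multiply-detransform pattern.

```python
import numpy as np

def winograd_f23(d, g):
    """F(2,3): two outputs of a 3-tap FIR over a 4-sample tile using
    4 multiplications instead of 6 (stride 1)."""
    m1 = (d[0] - d[2]) * g[0]
    m2 = (d[1] + d[2]) * (g[0] + g[1] + g[2]) / 2
    m3 = (d[2] - d[1]) * (g[0] - g[1] + g[2]) / 2
    m4 = (d[1] - d[3]) * g[2]
    return np.array([m1 + m2 + m3, m2 - m3 - m4])

d = np.array([1.0, 2.0, 3.0, 4.0])    # input tile
g = np.array([0.5, -1.0, 2.0])        # filter taps
direct = np.array([d[0]*g[0] + d[1]*g[1] + d[2]*g[2],
                   d[1]*g[0] + d[2]*g[1] + d[3]*g[2]])
print(winograd_f23(d, g), direct)      # the two results must agree
```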
U-Net: Convolutional Networks for Biomedical Image Segmentation There is broad agreement that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net.
Efficient Winograd or Cook-Toom Convolution Kernel Implementation on Widely Used Mobile CPUs The Winograd or Cook-Toom class of algorithms helps to reduce the overall compute complexity of many modern deep convolutional neural networks (CNNs). Although there has been a lot of research done on models and algorithmic optimizations of CNNs, little attention has been paid to the efficient implementation of these algorithms on embedded CPUs, which usually have very limited memory and a low power budget. This paper aims to fill this gap and focuses on the efficient implementation of Winograd or Cook-Toom based convolution on modern Arm Cortex-A CPUs, widely used in mobile devices today. Specifically, we demonstrate a reduction in inference latency by using a set of optimization strategies that improve the utilization of computational resources and by effectively leveraging the ARMv8-A NEON SIMD instruction set. We evaluated our proposed region-wise multi-channel implementations on the Arm Cortex-A73 platform using several representative CNNs. The results show significant full-network performance improvements, of up to 60%, over existing im2row/im2col based optimization techniques.
A High-Throughput and Power-Efficient FPGA Implementation of YOLO CNN for Object Detection Convolutional neural networks (CNNs) require numerous computations and external memory accesses. Frequent accesses to off-chip memory cause slow processing and large power dissipation. For real-time object detection with high throughput and power efficiency, this paper presents a Tera-OPS streaming hardware accelerator implementing a you-only-look-once (YOLO) CNN. The parameters of the YOLO CNN are retrained and quantized with the PASCAL VOC data set using binary weight and flexible low-bit activation. The binary weight enables storing the entire network model in block RAMs of a field-programmable gate array (FPGA) to reduce off-chip accesses aggressively and, thereby, achieve significant performance enhancement. In the proposed design, all convolutional layers are fully pipelined for enhanced hardware utilization. The input image is delivered to the accelerator line-by-line. Similarly, the output from the previous layer is transmitted to the next layer line-by-line. The intermediate data are fully reused across layers, thereby eliminating external memory accesses. The decreased dynamic random access memory (DRAM) accesses reduce DRAM power consumption. Furthermore, as the convolutional layers are fully parameterized, it is easy to scale up the network. In this streaming design, each convolution layer is mapped to a dedicated hardware block. Therefore, it outperforms the “one-size-fits-all” designs in both performance and power efficiency. This CNN implemented using VC707 FPGA achieves a throughput of 1.877 tera operations per second (TOPS) at 200 MHz with batch processing while consuming 18.29 W of on-chip power, which shows the best power efficiency compared with the previous research. As for object detection accuracy, it achieves a mean average precision (mAP) of 64.16% for the PASCAL VOC 2007 data set that is only 2.63% lower than the mAP of the same YOLO network with full precision.
Ara: A 1-GHz+ Scalable and Energy-Efficient RISC-V Vector Processor With Multiprecision Floating-Point Support in 22-nm FD-SOI. In this article, we present Ara, a 64-bit vector processor based on the version 0.5 draft of RISC-V's vector extension, implemented in GlobalFoundries 22FDX fully depleted silicon-on-insulator (FD-SOI) technology. Ara's microarchitecture is scalable, as it is composed of a set of identical lanes, each containing part of the processor's vector register file and functional units. It achieves up to 9...
Winograd Convolution for DNNs: Beyond Linear Polynomials. We investigated a wider range of Winograd-family convolution algorithms for deep neural networks. We present the explicit Winograd convolution algorithm in the general case (using polynomials of degrees higher than one). It allows us to construct more versions, differing in performance, than the commonly used Winograd convolution algorithms, and to improve the accuracy and performance of convolution computations. We found that in $fp16$ this approach gives better accuracy of image recognition while keeping the same number of general multiplications computed per single output point as the commonly used Winograd algorithm for a kernel of size $3 \times 3$ and output size equal to $4 \times 4$. We demonstrated that in $bf16$ it is possible to perform the convolution computation faster while keeping the accuracy of image recognition the same as for the direct convolution method. We tested our approach on a subset of 2000 images from the ImageNet validation set. We present the results for three precisions of computation: $fp32$, $fp16$, and $bf16$.
ImageNet Large Scale Visual Recognition Challenge. The ImageNet Large Scale Visual Recognition Challenge is a benchmark in object category classification and detection on hundreds of object categories and millions of images. The challenge has been run annually from 2010 to present, attracting participation from more than fifty institutions. This paper describes the creation of this benchmark dataset and the advances in object recognition that have been possible as a result. We discuss the challenges of collecting large-scale ground truth annotation, highlight key breakthroughs in categorical object recognition, provide a detailed analysis of the current state of the field of large-scale image classification and object detection, and compare the state-of-the-art computer vision accuracy with human accuracy. We conclude with lessons learned in the 5 years of the challenge, and propose future directions and improvements.
The GPU Computing Era GPU computing is at a tipping point, becoming more widely used in demanding consumer applications and high-performance computing. This article describes the rapid evolution of GPU architectures, from graphics processors to massively parallel many-core multiprocessors, along with recent developments in GPU computing architectures and how the enthusiastic adoption of CPU+GPU coprocessing is accelerating parallel applications.
MorphoSys: An Integrated Reconfigurable System for Data-Parallel and Computation-Intensive Applications This paper introduces MorphoSys, a reconfigurable computing system developed to investigate the effectiveness of combining reconfigurable hardware with general-purpose processors for word-level, computation-intensive applications. MorphoSys is a coarse-grain, integrated, and reconfigurable system-on-chip, targeted at high-throughput and data-parallel applications. It is comprised of a reconfigurable array of processing cells, a modified RISC processor core, and an efficient memory interface unit. This paper describes the MorphoSys architecture, including the reconfigurable processor array, the control processor, and data and configuration memories. The suitability of MorphoSys for the target application domain is then illustrated with examples such as video compression, data encryption and target recognition. Performance evaluation of these applications indicates improvements of up to an order of magnitude (or more) on MorphoSys, in comparison with other systems.
Pinning adaptive synchronization of a general complex dynamical network There are two challenging fundamental questions in pinning control of complex networks: (i) How many nodes of a network with fixed structure and coupling strength must be pinned to reach network synchronization? (ii) How large must the coupling strength of a network with fixed structure and pinned nodes be to realize network synchronization? To address these two questions, we propose a general complex dynamical network model and further investigate its pinning adaptive synchronization. Based on this model, we obtain several novel adaptive synchronization criteria which indeed give positive answers to these two questions. That is, we provide a simple approximate formula for estimating the number of pinning nodes and the magnitude of the coupling strength required for a given general complex dynamical network. Here, the coupling-configuration matrix and the inner-coupling matrix are not necessarily symmetric. Moreover, our pinning adaptive controllers are rather simple compared with some traditional controllers. A Barabási–Albert network example is finally given to show the effectiveness of the proposed synchronization criteria.
Wireless communications in the twenty-first century: a perspective Wireless communications are expected to be the dominant mode of access technology in the next century. Besides voice, a new range of services such as multimedia, high-speed data, etc. are being offered for delivery over wireless networks. Mobility will be seamless, realizing the concept of persons being in contact anywhere, at any time. Two developments are likely to have a substantial impact on t...
Minimum-Cost Data Delivery in Heterogeneous Wireless Networks With various wireless technologies developed, a ubiquitous and integrated architecture is envisioned for future wireless communication. An important optimization issue in such an integrated system is how to minimize the overall communication cost by intelligently utilizing the available heterogeneous wireless technologies while, at the same time, meeting the quality-of-service requirements of mobi...
SPECS: A Lightweight Runtime Mechanism for Protecting Software from Security-Critical Processor Bugs Processor implementation errata remain a problem, and worse, a subset of these bugs are security-critical. We classified 7 years of errata from recent commercial processors to understand the magnitude and severity of this problem, and found that of 301 errata analyzed, 28 are security-critical. We propose the SECURITY-CRITICAL PROCESSOR ERRATA CATCHING SYSTEM (SPECS) as a low-overhead solution to this problem. SPECS employs a dynamic verification strategy that is made lightweight by limiting protection to only security-critical processor state. As a proof-of-concept, we implement a hardware prototype of SPECS in an open source processor. Using this prototype, we evaluate SPECS against a set of 14 bugs inspired by the types of security-critical errata we discovered in the classification phase. The evaluation shows that SPECS is 86% effective as a defense when deployed using only ISA-level state; incurs less than 5% area and power overhead; and has no software run-time overhead.
An Event-Driven Quasi-Level-Crossing Delta Modulator Based on Residue Quantization This article introduces a digitally intensive event-driven quasi-level-crossing (quasi-LC) delta-modulator analog-to-digital converter (ADC) with adaptive resolution (AR) for Internet of Things (IoT) wireless networks, in which minimizing the average sampling rate for sparse input signals can significantly reduce the power consumed in data transmission, processing, and storage. The proposed AR quasi-LC delta modulator quantizes the residue voltage signal with a 4-bit asynchronous successive-approximation-register (SAR) sub-ADC, which enables a straightforward implementation of LC and AR algorithms in the digital domain. The proposed modulator achieves data compression by means of a globally signal-dependent average sampling rate and achieves AR through a digital multi-level comparison window that overcomes the tradeoff between the dynamic range and the input bandwidth in the conventional LC ADCs. Engaging the AR algorithm reduces the average sampling rate by a factor of 3 at the edge of the modulator's signal bandwidth. The proposed modulator is fabricated in 28-nm CMOS and achieves a peak SNDR of 53 dB over a signal bandwidth of 1.42 MHz while consuming 205 μW, with an active area of 0.0126 mm².
1.066667
0.066667
0.066667
0.066667
0.066667
0.066667
0.007407
0
0
0
0
0
0
0
A Low-Profile Autonomous Interface Circuit for Piezoelectric Micro-Power Generators This paper presents a low-profile and autonomous piezoelectric energy harvesting system consisting of an extraction rectifier and a maximum power point tracking (MPPT) circuit for powering portable electronics. The synchronized switch harvesting on capacitor-inductor (SSHCI) technique, with its unique two-step voltage flipping process, is utilized to downsize the bulky external inductor and extend the application areas of such harvesting systems. SSHCI implementation with a small flipping inductor-capacitor combination enhances voltage flipping efficiency and accordingly attains power extraction improvements over conventional synchronized switch harvesting on inductor (SSHI) circuits utilizing bulky external components. A novel MPPT system provides robustness of operation against changing load and excitation conditions. The innovation in MPPT comes from the refresh unit, which continually monitors the excitation conditions of the piezoelectric harvester to detect any change in the optimum storage voltage. Compared with conventional circuits, optimal flipping detection inspired by active diode structures eliminates the need for external adjustment, delivering autonomy to SSHCI. Inductor sharing between SSHCI and MPPT reduces the number of external components. The circuit is fabricated in 180 nm CMOS technology with a 1.23 mm² active area, and is tested with a custom MEMS piezoelectric harvester at its resonance frequency of 415 Hz. It is capable of extracting 5.44× more power compared to an ideal FBR, while using a 100 μH inductor. Due to the reduction of losses through low-power design techniques, a measured power conversion efficiency of 83% is achieved at 3.2 V piezoelectric open-circuit voltage amplitude. Boosting of power generation capacity in a low profile is a significant contribution of the design.
An 18 nA, 87% Efficient Solar, Vibration and RF Energy-Harvesting Power Management System With a Single Shared Inductor. We present a modular power management system that can harvest energy from three sources simultaneously, with available power levels of 25 nW to 100 μW, with one inductor. The DC-DC converter is clocked with energize and dump pulses, and the pulse-widths are generated for constant peak inductor current and for no reversal, without current sensing. We use a comparator to reach the open-circuit-volta...
A Single-Inductor Triple-Source Quad-Mode Energy-Harvesting Interface With Automatic Source Selection and Reversely Polarized Energy Recycling. This paper presents a single-inductor triple-source quad-mode (SITSQM) energy-harvesting interface in a 0.18-μm CMOS process. The proposed reversely polarized energy recycling (RPER) technique improves not only the conversion efficiency at low input voltage but also the system's output power range. The interface employs the buck-boost topology to convert energy from photovoltaic (PV) cells and a t...
An Efficient Piezoelectric Energy Harvesting Interface Circuit Using a Sense-and-Set Rectifier Piezoelectric energy harvesters (PEHs) are widely deployed in many self-sustaining systems, and proper rectifier circuits can significantly improve the energy conversion efficiency and, thus, increase the harvested energy. Various active rectifiers have been proposed in the past decade, such as synchronized switch harvesting on inductor (SSHI) and synchronous electric charge extraction (SECE). This article presents a sense-and-set (SaS) rectifier that achieves maximum-power-point-tracking (MPPT) of PEHs and maintains optimal energy extraction for different input excitation levels and output voltages. The proposed circuit is fabricated in the 0.18-μm CMOS process with a 0.47-mm² core area, a 230-nW active power, and a 7-nW leakage power. Measured with a commercial PEH device (Mide PPA-1022) at 85- and 60-Hz vibration frequency, the proposed circuit shows 512% and 541% power extraction improvement [figure of merit (FoM)] compared with an ideal full-bridge rectifier (FBR) for ON-resonance and OFF-resonance vibrations, respectively, while maintaining high efficiency across different input levels and PEH parameters.
A Switched Capacitor Multiple Input Single Output Energy Harvester (Solar + Piezo) Achieving 74.6% Efficiency With Simultaneous MPPT This paper presents an inductor-less switched-capacitor based energy harvester, which can simultaneously harvest from 2 energy sources (Solar + Piezo). The proposed harvester employs a maximum power point tracking algorithm, by changing the conversion ratios of the charge pumps for piezo and solar sources, and output voltage control by varying the switching frequency. The proposed MPPT algorithm can match the input impedance of the two sources simultaneously. Implemented in 65nm CMOS, the proposed harvester can generate a fixed output between 1.8 and 2.5 V while delivering 35 μW to 70 μW of power with a peak power conversion efficiency of 74.6%.
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ⋄W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ⋄W. Thus, ⋄W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Cognitive radio: brain-empowered wireless communications Cognitive radio is viewed as a novel approach for improving the utilization of a precious natural resource: the radio electromagnetic spectrum. The cognitive radio, built on a software-defined radio, is defined as an intelligent wireless communication system that is aware of its environment and uses the methodology of understanding-by-building to learn from the environment and adapt to statistical variations in the input stimuli, with two primary objectives in mind: · highly reliable communication whenever and wherever needed; · efficient utilization of the radio spectrum. Following the discussion of interference temperature as a new metric for the quantification and management of interference, the paper addresses three fundamental cognitive tasks. 1) Radio-scene analysis. 2) Channel-state estimation and predictive modeling. 3) Transmit-power control and dynamic spectrum management. This work also discusses the emergent behavior of cognitive radio.
Planning as heuristic search In the AIPS98 Planning Contest, the hsp planner showed that heuristic search planners can be competitive with state-of-the-art Graphplan and sat planners. Heuristic search planners like hsp transform planning problems into problems of heuristic search by automatically extracting heuristics from Strips encodings. They differ from specialized problem solvers such as those developed for the 24-Puzzle and Rubik's Cube in that they use a general declarative language for stating problems and a general mechanism for extracting heuristics from these representations. In this paper, we study a family of heuristic search planners that are based on a simple and general heuristic that assumes that action preconditions are independent. The heuristic is then used in the context of best-first and hill-climbing search algorithms, and is tested over a large collection of domains. We then consider variations and extensions such as reversing the direction of the search for speeding node evaluation, and extracting information about propositional invariants for avoiding dead-ends. We analyze the resulting planners, evaluate their performance, and explain when they do best. We also compare the performance of these planners with two state-of-the-art planners, and show that the simplest planner based on a pure best-first search yields the most solid performance over a large set of problems. We also discuss the strengths and limitations of this approach, establish a correspondence between heuristic search planning and Graphplan, and briefly survey recent ideas that can reduce the current gap in performance between general heuristic search planners and specialized solvers.
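The independence assumption on action preconditions yields the additive heuristic at the heart of these planners. Below is a minimal sketch for unit-cost STRIPS actions; the fixpoint loop and the toy domain are illustrative, not hsp's actual data structures.

```python
def h_add(state, goal, actions):
    """Additive heuristic: estimated cost of each proposition assuming
    action preconditions are achieved independently.

    actions: iterable of (preconditions, add_effects) sets, each action
             costing 1 to apply.
    """
    cost = {p: 0 for p in state}
    changed = True
    while changed:                      # fixpoint over proposition costs
        changed = False
        for pre, add in actions:
            if all(p in cost for p in pre):
                c = 1 + sum(cost[p] for p in pre)
                for p in add:
                    if cost.get(p, float("inf")) > c:
                        cost[p] = c
                        changed = True
    return sum(cost.get(p, float("inf")) for p in goal)

# Toy domain: pick up key -> open door -> reach goal.
actions = [({"at_key"}, {"hold_key"}),
           ({"hold_key"}, {"door_open"}),
           ({"door_open"}, {"at_goal"})]
print(h_add({"at_key"}, {"at_goal"}, actions))   # prints 3 (three chained actions)
```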
Probabilistic neural networks By replacing the sigmoid activation function often used in neural networks with an exponential function, a probabilistic neural network (PNN) that can compute nonlinear decision boundaries which approach the Bayes optimal is formed. Alternate activation functions having similar properties are also discussed. A fourlayer neural network of the type proposed can map any input pattern to any number of classifications. The decision boundaries can be modified in real-time using new data as they become available, and can be implemented using artificial hardware “neurons” that operate entirely in parallel. Provision is also made for estimating the probability and reliability of a classification as well as making the decision. The technique offers a tremendous speed advantage for problems in which the incremental adaptation time of back propagation is a significant fraction of the total computation time. For one application, the PNN paradigm was 200,000 times faster than back-propagation.
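Since every pattern unit is just a kernel centred on a training sample, a PNN's decision rule fits in a few lines. The sketch below is a minimal Parzen-window version with a Gaussian kernel; the smoothing parameter sigma and the toy data are assumptions for illustration.

```python
import numpy as np

def pnn_classify(X_train, y_train, x, sigma=0.5):
    """Probabilistic neural network: one 'pattern unit' per training sample
    with an exponential activation, summed per class, argmax decision."""
    scores = {}
    for c in np.unique(y_train):
        diffs = X_train[y_train == c] - x
        # Parzen window: average of Gaussian kernels centred on class samples.
        scores[c] = np.mean(np.exp(-np.sum(diffs**2, axis=1) / (2 * sigma**2)))
    return max(scores, key=scores.get)

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (30, 2)), rng.normal(3, 1, (30, 2))])
y = np.array([0] * 30 + [1] * 30)
print(pnn_classify(X, y, np.array([2.8, 3.1])))   # expected: class 1
```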
TaintDroid: An Information-Flow Tracking System for Realtime Privacy Monitoring on Smartphones Today's smartphone operating systems frequently fail to provide users with visibility into how third-party applications collect and share their private data. We address these shortcomings with TaintDroid, an efficient, system-wide dynamic taint tracking and analysis system capable of simultaneously tracking multiple sources of sensitive data. TaintDroid enables realtime analysis by leveraging Android's virtualized execution environment. TaintDroid incurs only 32% performance overhead on a CPU-bound microbenchmark and imposes negligible overhead on interactive third-party applications. Using TaintDroid to monitor the behavior of 30 popular third-party Android applications, in our 2010 study we found 20 applications potentially misused users' private information; so did a similar fraction of the tested applications in our 2012 study. Monitoring the flow of privacy-sensitive data with TaintDroid provides valuable input for smartphone users and security service firms seeking to identify misbehaving applications.
On receding horizon feedback control Receding horizon feedback control (RHFC) was originally introduced as an easy method for designing stable state-feedback controllers for linear systems. Here those results are generalized to the control of nonlinear autonomous systems, and we develop a performance index which is minimized by the RHFC (inverse optimal control problem). Previous results for linear systems have shown that desirable nonlinear controllers can be developed by making the RHFC horizon distance a function of the state. That functional dependence was implicit and difficult to implement on-line. Here we develop similar controllers for which the horizon distance is an easily computed explicit function of the state.
A MIMO decoder accelerator for next generation wireless communications In this paper, we present a multiple-input multiple-output (MIMO) decoder accelerator architecture that offers versatility and reprogrammability while maintaining a very high performance-cost metric. The accelerator is meant to address the MIMO decoding bottlenecks associated with the convergence of multiple high-speed wireless standards onto a single device. It is scalable in the number of antennas, bandwidth, modulation format, and most importantly, present and emerging decoder algorithms. It features a Harvard-like architecture with complex vector operands and a deeply pipelined fixed-point complex arithmetic processing unit. When implemented on a Xilinx Virtex-4 LX200FF1513 field-programmable gate array (FPGA), the design occupied 43% of overall FPGA resources. The accelerator shows an advantage of up to three orders of magnitude (1000 times) in power-delay product for typical MIMO decoding operations relative to a general-purpose DSP. When compared to dedicated application-specific IC (ASIC) implementations of MMSE MIMO decoders, the accelerator showed a degradation of 340%-17%, depending on the actual ASIC being considered. In order to optimize the design for both speed and area, specific challenges had to be overcome. These include: definition of the processing units and their interconnection; proper dynamic scaling of the signal; and memory partitioning and parallelism.
27.9 A 200kS/s 13.5b integrated-fluxgate differential-magnetic-to-digital converter with an oversampling compensation loop for contactless current sensing High-voltage applications such as electric motor controllers, solar panel power inverters, electric vehicle battery chargers, and uninterruptible and switching-mode power supplies benefit from the galvanic isolation of contactless current sensors (CCS) [1]. These include magnetic sensors that sense the magnetic field emanating from a current-carrying conductor. The offset and resolution of Hall-effect sensors are at the mT level [1-3], in contrast to the μT-level accuracy of integrated-fluxgate (IFG) magnetometers [4]. Previously reported sampled-data closed-loop IFG readouts have limited bandwidths, as their sampling frequencies (fs) are limited to be less than or equal to the IFG excitation frequency, fEXC [5-7]. This paper describes a differential closed-loop IFG CCS with fs > fEXC. The differential architecture rejects magnetic stray fields and achieves 750× larger bandwidth than the prior closed-loop IFG readouts [6-7] with 10× better offset than the Hall-effect sensors [1-3].
A 1V 3.5 μW Bio-AFE With Chopper-Capacitor-Chopper Integrator-Based DSL and Low Power GM-C Filter This brief presents a low-noise, low-power bio-signal acquisition analog front-end (Bio-AFE). It mainly includes a capacitively coupled chopper-stabilized instrumentation amplifier (CCIA), a programmable gain amplifier (PGA), a low-pass filter (LPF), and a successive approximation analog to digital converter (SAR ADC). A chopper-capacitor-chopper integrator based DC servo loop (C3IB-DSL...
1.2
0.2
0.2
0.2
0.1
0
0
0
0
0
0
0
0
0
The sliding DFT The sliding DFT process for spectrum analysis was presented and shown to be more efficient than the popular Goertzel (1958) algorithm for sample-by-sample DFT bin computations. The sliding DFT provides computational advantages over the traditional DFT or FFT for many applications requiring successive output calculations, especially when only a subset of the DFT output bins are required. Methods for output stabilization as well as time-domain data windowing by means of frequency-domain convolution were also discussed. A modified sliding DFT algorithm, called the sliding Goertzel DFT, was proposed to further reduce the computational workload. We start our sliding DFT discussion by providing a review of the Goertzel algorithm and use its behavior as a yardstick to evaluate the performance of the sliding DFT technique. We examine stability issues regarding the sliding DFT implementation as well as review the process of frequency-domain convolution to accomplish time-domain windowing. Finally, a modified sliding DFT structure is proposed that provides improved computational efficiency.
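A minimal sketch of the core recursion: each new sample updates bin k of a length-N window DFT in O(1) work, via S_k(n) = (S_k(n-1) + x(n) - x(n-N)) * exp(j2πk/N). The check against a full FFT of the final window holds exactly in exact arithmetic; the stabilization issues discussed above concern finite-precision drift of the recursion's marginally stable pole.

```python
import numpy as np

def sliding_dft_bin(x, k, N):
    """Track DFT bin k of a length-N sliding window, O(1) per sample."""
    twiddle = np.exp(2j * np.pi * k / N)
    s = 0.0 + 0.0j
    history = np.zeros(N, dtype=complex)   # circular buffer of past inputs
    outputs = []
    for n, sample in enumerate(x):
        # Remove the sample leaving the window, add the new one, rotate.
        s = (s + sample - history[n % N]) * twiddle
        history[n % N] = sample
        outputs.append(s)                  # first N outputs are transient
    return np.array(outputs)

N, k = 32, 5
x = np.cos(2 * np.pi * k * np.arange(4 * N) / N)   # tone exactly on bin k
sdft = sliding_dft_bin(x, k, N)
full = np.fft.fft(x[-N:])[k]                       # DFT of the last window
print(np.allclose(sdft[-1], full))                  # expect True
```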
Derivative-free optimization: a review of algorithms and comparison of software implementations. This paper addresses the solution of bound-constrained optimization problems using algorithms that require only the availability of objective function values but no derivative information. We refer to these algorithms as derivative-free algorithms. Fueled by a growing number of applications in science and engineering, the development of derivative-free optimization algorithms has long been studied, and it has found renewed interest in recent time. Along with many derivative-free algorithms, many software implementations have also appeared. The paper presents a review of derivative-free algorithms, followed by a systematic comparison of 22 related implementations using a test set of 502 problems. The test bed includes convex and nonconvex problems, smooth as well as nonsmooth problems. The algorithms were tested under the same conditions and ranked under several criteria, including their ability to find near-global solutions for nonconvex problems, improve a given starting point, and refine a near-optimal solution. A total of 112,448 problem instances were solved. We find that the ability of all these solvers to obtain good solutions diminishes with increasing problem size. For the problems used in this study, TOMLAB/MULTIMIN, TOMLAB/GLCCLUSTER, MCS and TOMLAB/LGO are better, on average, than other derivative-free solvers in terms of solution quality within 2,500 function evaluations. These global solvers outperform local solvers even for convex problems. Finally, TOMLAB/OQNLP, NEWUOA, and TOMLAB/MULTIMIN show superior performance in terms of refining a near-optimal solution.
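As a concrete example of the class of methods surveyed, here is a minimal compass (coordinate) search, one of the simplest derivative-free algorithms: poll the objective along plus and minus each coordinate direction, accept any improvement, and halve the step when none is found. It is a generic illustration, not one of the 22 benchmarked solvers.

```python
import numpy as np

def compass_search(f, x0, step=1.0, tol=1e-6, max_iter=10_000):
    """Directional direct search using function values only."""
    x, fx = np.asarray(x0, dtype=float), f(x0)
    for _ in range(max_iter):
        improved = False
        for d in np.vstack([np.eye(len(x)), -np.eye(len(x))]):
            trial = x + step * d          # poll one coordinate direction
            ft = f(trial)
            if ft < fx:                   # accept the first improvement
                x, fx, improved = trial, ft, True
                break
        if not improved:
            step /= 2                     # refine the mesh
            if step < tol:
                break
    return x, fx

quadratic = lambda v: (v[0] - 3.0)**2 + (v[1] + 1.0)**2
x, fx = compass_search(quadratic, [0.0, 0.0])
print(x, fx)   # converges to (3, -1) with f near 0
```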
A 14 bit 200 MS/s DAC With SFDR > 78 dBc, IM3 < −83 dBc and NSD < −163 dBm/Hz Across the Whole Nyquist Band Enabled by Dynamic-Mismatch Mapping. This paper presents a 14 bit 200 MS/s current-steering DAC with a novel digital calibration technique called dynamic-mismatch mapping (DMM). By optimizing the switching sequence of current cells to reduce the dynamic integral nonlinearity in an I-Q domain, the DMM technique digitally calibrates all mismatch errors so that both the DAC static and dynamic performance can be significantly improved in...
Modeling Timing Jitter Effects in Digital-to-Analog Converters Digital-to-analog converters (DACs) whose time base is affected by deterministic jitter are dealt with. A new model is particularly proposed, which is capable of describing how the desired spectral features of the analog waveform at the output of a DAC are distorted by the presence of sinusoidal timing jitter. In addition to a detailed description of the model, very practical and usable relations are given to make its use very straightforward. Efficacy and reliability of the model are assessed through several experiments on an actual DAC. Jitter effects predicted by the model are compared with those measured on the analog output signals. Special attention is paid to sinusoidal signals, which represent a key stimulus in a variety of application fields. Some tests are also conducted on more complex signals, such as digitally modulated signals peculiar to modern communication systems.
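The predicted effect is easy to reproduce numerically: sampling an ideal sine at instants perturbed by sinusoidal jitter creates spurious sidebands offset from the carrier by plus and minus the jitter frequency. The sketch below demonstrates this; all frequencies and the jitter amplitude are illustrative values, not parameters from the paper's experiments.

```python
import numpy as np

fs, f0 = 1.0e6, 101e3             # sample rate and carrier (illustrative)
fj, aj = 13e3, 2e-9               # jitter frequency and amplitude (seconds)
n = np.arange(4096)

t_ideal = n / fs
t_jitter = t_ideal + aj * np.sin(2 * np.pi * fj * t_ideal)  # sinusoidal jitter
x = np.sin(2 * np.pi * f0 * t_jitter)                        # jittered samples

spec = 20 * np.log10(np.abs(np.fft.rfft(x * np.hanning(len(n)))) + 1e-12)
spec -= spec.max()                 # normalise to the carrier level
bins = np.fft.rfftfreq(len(n), 1 / fs)
# Sinusoidal jitter produces sidebands at f0 +/- fj; print their levels.
for f in (f0 - fj, f0, f0 + fj):
    print(f"{f/1e3:7.1f} kHz: {spec[np.argmin(np.abs(bins - f))]:6.1f} dBc")
```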
A 65-nm CMOS 6-Bit 20 GS/s Time-Interleaved DAC With Full-Binary Sub-DACs. A 6-bit 20 GS/s two-channel time-interleaved current-steering digital-to-analog converter (DAC) with compact full-binary sub-DACs is presented. Optimally adjusted transition timings between the input data and the interleaving clock minimize glitches by the time-interleaving switches and enhance the high-frequency linearity. In order to prevent static linearity degradation by the leakage current th...
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ⋄W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ⋄W. Thus, ⋄W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Cognitive radio: brain-empowered wireless communications Cognitive radio is viewed as a novel approach for improving the utilization of a precious natural resource: the radio electromagnetic spectrum. The cognitive radio, built on a software-defined radio, is defined as an intelligent wireless communication system that is aware of its environment and uses the methodology of understanding-by-building to learn from the environment and adapt to statistical variations in the input stimuli, with two primary objectives in mind: · highly reliable communication whenever and wherever needed; · efficient utilization of the radio spectrum. Following the discussion of interference temperature as a new metric for the quantification and management of interference, the paper addresses three fundamental cognitive tasks. 1) Radio-scene analysis. 2) Channel-state estimation and predictive modeling. 3) Transmit-power control and dynamic spectrum management. This work also discusses the emergent behavior of cognitive radio.
Planning as heuristic search In the AIPS98 Planning Contest, the hsp planner showed that heuristic search planners can be competitive with state-of-the-art Graphplan and sat planners. Heuristic search planners like hsp transform planning problems into problems of heuristic search by automatically extracting heuristics from Strips encodings. They differ from specialized problem solvers such as those developed for the 24-Puzzle and Rubik's Cube in that they use a general declarative language for stating problems and a general mechanism for extracting heuristics from these representations. In this paper, we study a family of heuristic search planners that are based on a simple and general heuristic that assumes that action preconditions are independent. The heuristic is then used in the context of best-first and hill-climbing search algorithms, and is tested over a large collection of domains. We then consider variations and extensions such as reversing the direction of the search for speeding node evaluation, and extracting information about propositional invariants for avoiding dead-ends. We analyze the resulting planners, evaluate their performance, and explain when they do best. We also compare the performance of these planners with two state-of-the-art planners, and show that the simplest planner based on a pure best-first search yields the most solid performance over a large set of problems. We also discuss the strengths and limitations of this approach, establish a correspondence between heuristic search planning and Graphplan, and briefly survey recent ideas that can reduce the current gap in performance between general heuristic search planners and specialized solvers.
Towards a Common API for Structured Peer-to-Peer Overlays In this paper, we describe an ongoing effort to define common APIs for structured peer-to-peer overlays and the key abstractions that can be built on them. In doing so, we hope to facilitate independent innovation in overlay protocols, services, and applications, to allow direct experimental comparisons, and to encourage application development by third parties. We provide a snapshot of our efforts and discuss open problems in an effort to solicit feedback from the research community.
Towards a higher-order synchronous data-flow language The paper introduces a higher-order synchronous data-flow language in which communication channels may themselves transport programs. This provides a means to dynamically reconfigure data-flow processes. The language comes as a natural and strict extension of both lustre and lucy. This extension is conservative, in the sense that a first-order restriction of the language can receive the same semantics. We illustrate the expressivity of the language with some examples, before giving the formal semantics of the underlying calculus. The language is equipped with a polymorphic type system allowing types to be automatically inferred and a clock calculus rejecting programs for which synchronous execution cannot be statically guaranteed. To our knowledge, this is the first higher-order synchronous data-flow language where stream functions are first-class citizens.
An almost necessary and sufficient condition for robust stability of closed-loop systems with disturbance observer The disturbance observer (DOB)-based controller has been widely employed in industrial applications due to its powerful ability to reject disturbances and compensate plant uncertainties. In spite of various successful applications, no necessary and sufficient condition for robust stability of the closed loop systems with the DOB has been reported in the literature. In this paper, we present an almost necessary and sufficient condition for robust stability when the Q-filter has a sufficiently small time constant. The proposed condition indicates that robust stabilization can be achieved against arbitrarily large (but bounded) uncertain parameters, provided that an outer-loop controller stabilizes the nominal system, and uncertain plant is of minimum phase.
Practical Timing Side Channel Attacks against Kernel Space ASLR Due to the prevalence of control-flow hijacking attacks, a wide variety of defense methods to protect both user space and kernel space code have been developed in the past years. A few examples that have received widespread adoption include stack canaries, non-executable memory, and Address Space Layout Randomization (ASLR). When implemented correctly (i.e., a given system fully supports these protection methods and no information leak exists), the attack surface is significantly reduced and typical exploitation strategies are severely thwarted. All modern desktop and server operating systems support these techniques and ASLR has also been added to different mobile operating systems recently. In this paper, we study the limitations of kernel space ASLR against a local attacker with restricted privileges. We show that an adversary can implement a generic side channel attack against the memory management system to deduce information about the privileged address space layout. Our approach is based on the intrinsic property that the different caches are shared resources on computer systems. We introduce three implementations of our methodology and show that our attacks are feasible on four different x86-based CPUs (both 32- and 64-bit architectures) and also applicable to virtual machines. As a result, we can successfully circumvent kernel space ASLR on current operating systems. Furthermore, we also discuss mitigation strategies against our attacks, and propose and implement a defense solution with negligible performance overhead.
ΣΔ ADC with fractional sample rate conversion for a software-defined radio receiver.
A Hybrid 1st/2nd-Order VCO-Based CTDSM With Rail-to-Rail Artifact Tolerance for Bidirectional Neural Interface Bi-directional brain-machine interfaces enable simultaneous brain activity monitoring and neural modulation. However, stimulation artifacts can saturate the instrumentation front-end while concurrent on-site recording is needed. This brief presents a voltage-controlled-oscillator (VCO) based continuous-time ΔΣ modulator (CTDSM) with rail-to-rail input range and fast artifact tracking. A hybrid 1st/2nd-order loop is designed to achieve high dynamic range (DR) and large input range. Stimulation artifacts are detected by a phase counter and compensated by the 1st-order loop. The residue signal is digitized by the 2nd-order loop for high precision. Redundancy between the two loops is implemented as feedback capacitor elements with non-binary ratio to guarantee feedback stability and linearity. Fabricated in a 55-nm CMOS process, the prototype achieves 65.7 dB SNDR across a 10 kHz bandwidth with a full scale of 200 mVpp, and a ±1.2 V input range is achieved to suppress artifacts. A saline-based experiment with simultaneous stimulation and recording demonstrates that the implemented system can track and tolerate rail-to-rail stimulation artifacts within 30 μs while small neural signals can be continuously monitored.
Scores: 1.2, 0.2, 0.2, 0.2, 0.2, 0, 0, 0, 0, 0, 0, 0, 0, 0
Improved Pulse Regulation Control Technique for Switching DC–DC Converters Operating in DCM Improved pulse regulation (IPR) control, a novel control technique for switching dc-dc converters, is proposed and studied in this paper. According to the output voltage and load current of the switching dc-dc converter, the IPR control technique achieves output voltage regulation by generating a control pulse train made up of preset control pulses with different duty ratios. IPR control needs only comparators, triggers, and some simple logic devices, without the error amplifier and corresponding compensation circuit of a pulse width modulation control scheme; the IPR control scheme is thus easy to realize and offers excellent transient performance and stability. The principle and operation of the IPR control scheme are introduced and illustrated with a buck converter operating in discontinuous conduction mode as an example. Simulation and experimental results are presented to show that the IPR-controlled converter has much lower output voltage ripple and more accurate output voltage regulation than a conventional pulse regulation (PR) controlled converter.
Mode-Selectable High-Efficiency Low-Quiescent-Current Synchronous Buck DC–DC Converter In this paper, a mode-selectable synchronous buck DC-DC converter with high efficiency and low quiescent current is proposed, which is particularly suitable for use as an Li-ion battery charger. The high efficiency is obtained by applying dynamic power management under light load, which puts some modules of the chip into a sleep state and brings the quiescent current of the whole chip down to 45 μA. At the same time, power metal-oxide-semiconductor (MOS) devices are also shut down to decrease the dissipation of the system. A simple loop compensation method is also proposed, which can eliminate the influence of the output capacitor's high equivalent resistance on the stability of the system loop. The converter has been fabricated in a 0.5-μm complementary MOS process. Experimental results show that the peak efficiency is 94% at an output current of 100 mA when the supply voltage is 2.7 V. Moreover, the output voltage can recover within 14 μs after a 400-mA load step.
Subharmonic Analysis for Buck Converters With Constant On-Time Control and Ramp Compensation This paper presents a new subharmonic analysis for buck converters with constant on-time control and ramp compensation. For constant on-time control, subharmonic oscillation can be eliminated by adding a compensation ramp with a fixed slope during the off time and a fixed level during the on time. Based on the inductor current information, the compensation ramp, and the charge variations of the output capacitor, the minimum amount of compensation ramp needed to avoid subharmonic oscillation is derived, and the effect of circuit propagation delay is quantified. A prototype buck converter is built using constant on-time control with ramp compensation. Experimental results validate the detailed theoretical analysis.
A Current-Mode Hysteretic Buck Converter with Multiple-Reset RC-Based Inductor Current Sensor A current-mode hysteretic buck converter is described in which the inductor current is sensed by a resistor-capacitor (RC) network. The current-sensing RC network is reset multiple times per switching clock period, which allows it to have a time constant much smaller than the switching clock period and therefore to occupy a small silicon area. A frequency regulator keeps the switching frequency constant regardless of the operating condition of the buck converter and also integrates a comparator delay compensator as its loop filter. The current-mode hysteretic buck converter, implemented in a 65 nm complementary metal-oxide-semiconductor (CMOS) process, provides a 0.6–2.0 V output from a 3.3 V input with a 1 MHz switching frequency. The maximum load current is 1.5 A and the measured peak power efficiency is 96.3%.
An integrated CMOS current-sensing circuit for low-voltage current-mode buck regulator. An integrated current-sensing circuit for a low-voltage buck regulator is presented. The minimum achievable supply voltage of the proposed current-sensing circuit is 1.2 V, implemented in a CMOS technology with VTH = 0.85 V, and the current-sensing accuracy is higher than 94%. With the developed current-sensing circuit, a buck regulator, which is able to operate at a 1.2-V supply, is implemented....
Unreliable failure detectors for reliable distributed systems We introduce the concept of unreliable failure detectors and study how they can be used to solve Consensus in asynchronous systems with crash failures. We characterise unreliable failure detectors in terms of two properties—completeness and accuracy. We show that Consensus can be solved even with unreliable failure detectors that make an infinite number of mistakes, and determine which ones can be used to solve Consensus despite any number of crashes, and which ones require a majority of correct processes. We prove that Consensus and Atomic Broadcast are reducible to each other in asynchronous systems with crash failures; thus, the above results also apply to Atomic Broadcast. A companion paper shows that one of the failure detectors introduced here is the weakest failure detector for solving Consensus [Chandra et al. 1992].
A Fast and High Quality Multilevel Scheme for Partitioning Irregular Graphs Recently, a number of researchers have investigated a class of graph partitioning algorithms that reduce the size of the graph by collapsing vertices and edges, partition the smaller graph, and then uncoarsen it to construct a partition for the original graph (Bui and Jones, Proc. of the 6th SIAM Conference on Parallel Processing for Scientific Computing, 1993, 445-452; Hendrickson and Leland, A Multilevel Algorithm for Partitioning Graphs, Tech. report SAND 93-1301, Sandia National Laboratories, Albuquerque, NM, 1993). From the early work it was clear that multilevel techniques held great promise; however, it was not known if they can be made to consistently produce high quality partitions for graphs arising in a wide range of application domains. We investigate the effectiveness of many different choices for all three phases: coarsening, partition of the coarsest graph, and refinement. In particular, we present a new coarsening heuristic (called heavy-edge heuristic) for which the size of the partition of the coarse graph is within a small factor of the size of the final partition obtained after multilevel refinement. We also present a much faster variation of the Kernighan-Lin (KL) algorithm for refining during uncoarsening. We test our scheme on a large number of graphs arising in various domains including finite element methods, linear programming, VLSI, and transportation. Our experiments show that our scheme produces partitions that are consistently better than those produced by spectral partitioning schemes in substantially smaller time. Also, when our scheme is used to compute fill-reducing orderings for sparse matrices, it produces orderings that have substantially smaller fill than the widely used multiple minimum degree algorithm.
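The heavy-edge heuristic named above is simple enough to sketch. Below is a minimal, illustrative Python pass, assuming a weighted adjacency-dict graph; a real multilevel partitioner would contract each matched pair into one coarse vertex and recurse, as the paper describes.

```python
def heavy_edge_matching(adj):
    # One coarsening pass: match each unmatched vertex with the unmatched
    # neighbor joined by the heaviest edge; matched pairs would then be
    # contracted into single vertices of the coarser graph.
    matched, pairs = set(), []
    for u in adj:
        if u in matched:
            continue
        candidates = [(w, v) for v, w in adj[u].items() if v not in matched]
        if candidates:
            _, v = max(candidates)
            matched.update((u, v))
            pairs.append((u, v))
        else:
            matched.add(u)   # carried over to the coarse graph unmatched
    return pairs

adj = {0: {1: 5, 2: 1}, 1: {0: 5, 3: 2}, 2: {0: 1, 3: 4}, 3: {1: 2, 2: 4}}
print(heavy_edge_matching(adj))  # [(0, 1), (2, 3)]: the heaviest edges collapse
```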
Controllability and observability of Boolean control networks The controllability and observability of Boolean control networks are investigated. After a brief review on converting a logic dynamics to a discrete-time linear dynamics with a transition matrix, some formulas are obtained for retrieving network and its logical dynamic equations from this network transition matrix. Based on the discrete-time dynamics, the controllability via two kinds of inputs is revealed by providing the corresponding reachable sets precisely. Then the problem of observability is also solved by giving necessary and sufficient conditions.
From few to many: illumination cone models for face recognition under variable lighting and pose We present a generative appearance-based method for recognizing human faces under variation in lighting and viewpoint. Our method exploits the fact that the set of images of an object in fixed pose, but under all possible illumination conditions, is a convex cone in the space of images. Using a small number of training images of each face taken with different lighting directions, the shape and albedo of the face can be reconstructed. In turn, this reconstruction serves as a generative model that can be used to render, or synthesize, images of the face under novel poses and illumination conditions. The pose space is then sampled and, for each pose, the corresponding illumination cone is approximated by a low-dimensional linear subspace whose basis vectors are estimated using the generative model. Our recognition algorithm assigns to a test image the identity of the closest approximated illumination cone (based on Euclidean distance within the image space). We test our face recognition method on 4,050 images from the Yale Face Database B; these images contain 405 viewing conditions (9 poses × 45 illumination conditions) for 10 individuals. The method performs almost without error, except on the most extreme lighting directions, and significantly outperforms popular recognition methods that do not use a generative model.
A world survey of artificial brain projects, Part I: Large-scale brain simulations Driven by rapid ongoing advances in computer hardware, neuroscience and computer science, Artificial Brain research and development are blossoming. This article constitutes the first half of a two-part world survey of artificial brain projects: this part dealing with large-scale brain simulations, and the second part with biologically inspired cognitive architectures (BICAs). The large-scale brain simulations we consider in depth here include those by Markram, Modha, Boahen, Horwitz, Edelman, Izhikevich, and Just. As well as reviewing the particulars of these simulation projects, we position them in a broader perspective, comparing the different underlying definitions of the concept of "simulation," noting that in many ways the projects are modeling neurosystems at different levels as well as using different methodologies.
H∞ control for sampled-data nonlinear systems described by Takagi–Sugeno fuzzy systems In this paper we consider the design problem of output feedback H∞ controllers for sampled-data fuzzy systems. We first transfer them into equivalent jump fuzzy systems. We establish the so-called Bounded Real Lemma for jump fuzzy systems and give a design method of γ-suboptimal output feedback H∞ controllers in terms of two Riccati inequalities with jumps. We then apply the main results to the sampled-data fuzzy systems and obtain a design method of γ-suboptimal output feedback H∞ controllers. We give a numerical example and construct a γ-suboptimal output feedback H∞ controller.
Digital signal processors in cellular radio communications Contemporary wireless communications are based on digital communications technologies. The recent commercial success of mobile cellular communications has been enabled in part by successful designs of digital signal processors with appropriate on-chip memories and specialized accelerators for digital transceiver operations. This article provides an overview of fixed point digital signal processors and ways in which they are used in cellular communications. Directions for future wireless-focused DSP technology developments are discussed
Kinesis: a security incident response and prevention system for wireless sensor networks This paper presents Kinesis, a security incident response and prevention system for wireless sensor networks, designed to keep the network functional despite anomalies or attacks and to recover from attacks without significant interruption. Due to the deployment of sensor networks in various critical infrastructures, the applications often impose stringent requirements on data reliability and service availability. Given the failure- and attack-prone nature of sensor networks, it is a pressing concern to enable sensor networks to provide continuous and unobtrusive services. Kinesis is quick and effective in response to incidents, distributed in nature, and dynamic in selecting response actions based on the context. It is lightweight in terms of response policy specification, and communication and energy overhead. A per-node single timer based distributed strategy to select the most effective response executor in a neighborhood makes the system simple and scalable, while achieving proper load distribution and redundant action optimization. We implement Kinesis in TinyOS and measure its performance for various application and network layer incidents. Extensive TOSSIM simulations and testbed experiments show that Kinesis successfully counteracts anomalies/attacks and behaves consistently under various attack scenarios and rates.
A 12.6 mW, 573-2901 kS/s Reconfigurable Processor for Reconstruction of Compressively Sensed Physiological Signals. This article presents a reconfigurable processor based on the alternating direction method of multipliers (ADMM) algorithm for reconstructing compressively sensed physiological signals. The architecture is flexible enough to support physiological ExG [electrocardiography (ECG), electromyography (EMG), and electroencephalography (EEG)] signals with various signal dimensions (128, 256, 384, and 512). Data c...
Scores: 1.2, 0.2, 0.2, 0.2, 0.01, 0, 0, 0, 0, 0, 0, 0, 0, 0
Combining Genetic Programming And Model-Driven Development Genetic programming (GP) is known to provide good solutions for many problems like the evolution of network protocols and distributed algorithms. In most cases it is a hardwired module of a design framework assisting the engineer in optimizing specific aspects in system development. In this article, we show how the utility of GP can be increased remarkably by isolating it as a component and integrating it into the model-driven software development process. Our GP framework produces XMI-encoded UML models that can easily be loaded into widely available modeling tools, which in turn offer code generation as well as additional analysis and test capabilities. We use the evolution of a distributed election algorithm as an example to illustrate how GP can be combined with model-driven development (MDD).
Gossip-based aggregation in large dynamic networks As computer networks increase in size, become more heterogeneous and span greater geographic distances, applications must be designed to cope with the very large scale, poor reliability, and often, with the extreme dynamism of the underlying network. Aggregation is a key functional building block for such applications: it refers to a set of functions that provide components of a distributed system access to global information including network size, average load, average uptime, location and description of hotspots, and so on. Local access to global information is often very useful, if not indispensable for building applications that are robust and adaptive. For example, in an industrial control application, some aggregate value reaching a threshold may trigger the execution of certain actions; a distributed storage system will want to know the total available free space; load-balancing protocols may benefit from knowing the target average load so as to minimize the load they transfer. We propose a gossip-based protocol for computing aggregate values over network components in a fully decentralized fashion. The class of aggregate functions we can compute is very broad and includes many useful special cases such as counting, averages, sums, products, and extremal values. The protocol is suitable for extremely large and highly dynamic systems due to its proactive structure---all nodes receive the aggregate value continuously, thus being able to track any changes in the system. The protocol is also extremely lightweight, making it suitable for many distributed applications including peer-to-peer and grid computing systems. We demonstrate the efficiency and robustness of our gossip-based protocol both theoretically and experimentally under a variety of scenarios including node and communication failures.
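As a concrete illustration of the averaging aggregate described above, here is a minimal Python sketch of push-pull gossip under the simplifying assumption that any node can contact any other; the paper's protocol runs the same exchange over an actual overlay and supports further aggregates such as counts, sums, and extrema.

```python
import random

def gossip_average(values, rounds=30):
    # Push-pull averaging: a node and a random peer both adopt the mean of
    # their current estimates; the global sum is invariant, so every local
    # estimate converges to the true average.
    est = list(values)
    n = len(est)
    for _ in range(rounds):
        for i in range(n):
            j = random.randrange(n)   # random peer (complete topology assumed)
            est[i] = est[j] = (est[i] + est[j]) / 2.0
    return est

print(gossip_average([10.0, 0.0, 4.0, 6.0]))  # each entry approaches 5.0
```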
Fundamental control algorithms in mobile networks In this work we propose simple and efficient protocols for counting and leader election in mobile networks. For mobile networks with fixed base stations we provide a new and very efficient protocol for counting the number of mobile hosts. The main part of the work concentrates on ad-hoc networks (no fixed subnetwork). We provide a model for these networks and leader election (and a special form of counting) protocols for both named and anonymous mobile hosts. In this work we define two protocol ...
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use dominance frontiers, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
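Since dominance frontiers are the paper's central new concept, a small sketch may help. This uses the later, widely taught two-runner formulation rather than the paper's own construction, and assumes predecessor lists and immediate dominators are already computed.

```python
def dominance_frontiers(preds, idom):
    # df[x] = nodes y such that x dominates a predecessor of y but does not
    # strictly dominate y itself; phi-functions for a variable are placed at
    # the frontier of every block that assigns it.
    df = {n: set() for n in preds}
    for n, ps in preds.items():
        if len(ps) >= 2:                  # only join points create frontiers
            for p in ps:
                runner = p
                while runner != idom[n]:
                    df[runner].add(n)
                    runner = idom[runner]
    return df

# Diamond CFG: entry branches to a and b, which merge again.
preds = {"entry": [], "a": ["entry"], "b": ["entry"], "merge": ["a", "b"]}
idom = {"entry": "entry", "a": "entry", "b": "entry", "merge": "entry"}
print(dominance_frontiers(preds, idom))  # a and b each have frontier {'merge'}
```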
Impossibility of distributed consensus with one faulty process The consensus problem involves an asynchronous system of processes, some of which may be unreliable. The problem is for the reliable processes to agree on a binary value. In this paper, it is shown that every protocol for this problem has the possibility of nontermination, even with only one faulty process. By way of contrast, solutions are known for the synchronous case, the “Byzantine Generals” problem.
Threaded code The concept of “threaded code” is presented as an alternative to machine language code. Hardware and software realizations of it are given. In software it is realized as interpretive code not needing an interpreter. Extensions and optimizations are mentioned.
Quick detection of difficult bugs for effective post-silicon validation We present a new technique for systematically creating post-silicon validation tests that quickly detect bugs in processor cores and uncore components (cache controllers, memory controllers, on-chip networks) of multi-core System on Chips (SoCs). Such quick detection is essential because long error detection latency, the time elapsed between the occurrence of an error due to a bug and its manifestation as an observable failure, severely limits the effectiveness of existing post-silicon validation approaches. In addition, we provide a list of realistic bug scenarios abstracted from “difficult” bugs that occurred in commercial multi-core SoCs. Our results for an OpenSPARC T2-like multi-core SoC demonstrate: 1. Error detection latencies of “typical” post-silicon validation tests can be very long, up to billions of clock cycles, especially for bugs in uncore components. 2. Our new technique shortens error detection latencies by several orders of magnitude to only a few hundred cycles for most bug scenarios. 3. Our new technique enables a 2-fold increase in bug coverage. An important feature of our technique is its software-only implementation without any hardware modification. Hence, it is readily applicable to existing designs.
Deep Residual Learning for Image Recognition Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers, 8× deeper than VGG nets [40], but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are the foundation of our submissions to the ILSVRC & COCO 2015 competitions, where we also won 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.
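The residual reformulation itself is compact enough to show. A minimal NumPy sketch of one fully connected block with random stand-in weights follows; the actual networks use stacked convolutional layers, batch normalization, and trained parameters.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    # The block learns a residual F(x) = relu(x @ w1) @ w2; the shortcut
    # adds x back, so an identity mapping only requires F to approach zero.
    out = relu(x @ w1)
    out = out @ w2
    return relu(out + x)

rng = np.random.default_rng(0)
x = rng.normal(size=(1, 64))
w1 = rng.normal(size=(64, 64)) * 0.01
w2 = rng.normal(size=(64, 64)) * 0.01
print(residual_block(x, w1, w2).shape)  # (1, 64)
```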
Estimating and sampling graphs with multidimensional random walks Estimating characteristics of large graphs via sampling is a vital part of the study of complex networks. Current sampling methods such as (independent) random vertex sampling and random walks are useful but have drawbacks. Random vertex sampling may require too many resources (time, bandwidth, or money). Random walks, which normally require fewer resources per sample, can suffer from large estimation errors in the presence of disconnected or loosely connected graphs. In this work we propose a new m-dimensional random walk that uses m dependent random walkers. We show that the proposed sampling method, which we call Frontier sampling, exhibits all of the nice sampling properties of a regular random walk. At the same time, our simulations over large real-world graphs show that, in the presence of disconnected or loosely connected components, Frontier sampling exhibits lower estimation errors than regular random walks. We also show that Frontier sampling is more suitable than random vertex sampling for sampling the tail of the degree distribution of the graph.
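A minimal sketch of the m-dimensional walk, assuming an adjacency-dict graph and uniformly random seed vertices: the walker to advance is chosen with probability proportional to the degree of the vertex it occupies, which is what lets the ensemble behave like a single random walk for estimation purposes.

```python
import random

def frontier_sampling(graph, m=3, steps=1000):
    # Keep m dependent walkers; advance the one sitting on a vertex chosen
    # with probability proportional to that vertex's degree, then record the
    # traversed edge. Sampled edges can feed degree-distribution estimators
    # just like single-walk samples.
    frontier = random.sample(list(graph), m)
    edges = []
    for _ in range(steps):
        weights = [len(graph[v]) for v in frontier]
        i = random.choices(range(m), weights=weights)[0]
        u = frontier[i]
        v = random.choice(graph[u])   # move along a uniform outgoing edge
        edges.append((u, v))
        frontier[i] = v
    return edges

g = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
print(len(frontier_sampling(g, m=2, steps=10)))  # 10 sampled edges
```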
On the time-complexity of broadcast in multi-hop radio networks: an exponential gap between determinism and randomization The time-complexity of deterministic and randomized protocols for achieving broadcast (distributing a message from a source to all other nodes) in arbitrary multi-hop radio networks is investigated. In many such networks, communication takes place in synchronous time-slots. A processor receives a message at a certain time-slot if exactly one of its neighbors transmits at that time-slot. We assume no collision-detection mechanism; i.e., it is not always possible to distinguish the case where no neighbor transmits from the case where several neighbors transmit simultaneously. We present a randomized protocol that achieves broadcast in time which is optimal up to a logarithmic factor. In particular, with probability 1 − ε, the protocol achieves broadcast within O((D + log n/ε) · log n) time-slots, where n is the number of processors in the network and D its diameter. On the other hand, we prove a linear lower bound on the deterministic time-complexity of broadcast in this model. Namely, we show that any deterministic broadcast protocol requires Ω(n) time-slots, even if the network has diameter 3, and n is known to all processors. These two results demonstrate an exponential gap in complexity between randomization and determinism.
Spurious Tone Suppression Techniques Applied to a Wide-Bandwidth 2.4 GHz Fractional-N PLL This paper demonstrates that spurious tones in the output of a fractional-N PLL can be reduced by replacing the ΔΣ modulator with a new type of digital quantizer and adding a charge pump offset combined with a sampled loop filter. It describes the underlying mechanisms of the spurious tones, proposes techniques that mitigate the effects of the mechanisms, and presents a phase noise cancell...
A 0.5 V 1.1 MS/sec 6.3 fJ/Conversion-Step SAR-ADC With Tri-Level Comparator in 40 nm CMOS This paper presents an extremely low-voltage operation and power efficient successive-approximation-register (SAR) analog-to-digital converter (ADC). Tri-level comparator is proposed to relax the speed requirement of the comparator and decrease the resolution of internal Digital-to-Analog Converter (DAC) by 1-bit. The internal charge redistribution DAC employs unit capacitance of 0.5 fF and ADC operates at nearly thermal noise limitation. To deal with the problem of capacitor mismatch, reconfigurable capacitor array and calibration procedure were developed. The prototype ADC fabricated using 40 nm CMOS process achieves 46.8 dB SNDR and 58.2 dB SFDR with 1.1 MS/sec at 0.5 V power supply. The FoM is 6.3-fJ/conversion step and the chip die area is only 160 μm × 70 μm.
Conductance modulation techniques in switched-capacitor DC-DC converter for maximum-efficiency tracking and ripple mitigation in 22nm Tri-gate CMOS Active conduction modulation techniques are demonstrated in a fully integrated multi-ratio switched-capacitor voltage regulator with hysteretic control, implemented in 22nm tri-gate CMOS with high-density MIM capacitors. We present (i) an adaptive switching frequency and switch-size scaling scheme for maximum efficiency tracking across a wide range of voltages and currents, governed by a frequency-based control law that is experimentally validated across multiple dies and temperatures, and (ii) a simple active ripple mitigation technique to modulate the gate drive of select MOSFET switches effectively in all conversion modes. Efficiency boosts of up to 15% are measured under light-load conditions. Load-independent output ripple of <50 mV is achieved, enabling fewer interleaving phases. Testchip implementations and measurements demonstrate ease of integration in SoC designs, power efficiency benefits and EMI/RFI improvements.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54 ×(CPU), 1.56 ×(GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02 ×(memristor-based PIM) and 10.48 ×(CMOS-based PIM), on average.
Scores: 1.2, 0.028571, 0.008333, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
A decentralized modular control framework for robust control of FES-activated walker-assisted paraplegic walking using terminal sliding mode and fuzzy logic control. A major challenge to developing functional electrical stimulation (FES) systems for paraplegic walking and widespread acceptance of these systems is the design of a robust control strategy that provides satisfactory tracking performance. The systems need to be robust against time-varying properties of neuromusculoskeletal dynamics, day-to-day variations, subject-to-subject variations, external dis...
Multiobjective evolutionary algorithms: A survey of the state of the art A multiobjective optimization problem involves several conflicting objectives and has a set of Pareto optimal solutions. By evolving a population of solutions, multiobjective evolutionary algorithms (MOEAs) are able to approximate the Pareto optimal set in a single run. MOEAs have attracted a lot of research effort during the last 20 years, and they are still one of the hottest research areas in the field of evolutionary computation. This paper surveys the development of MOEAs primarily during the last eight years. It covers algorithmic frameworks such as decomposition-based MOEAs (MOEA/Ds), memetic MOEAs, coevolutionary MOEAs, selection and offspring reproduction operators, MOEAs with specific search methods, MOEAs for multimodal problems, constraint handling and MOEAs, computationally expensive multiobjective optimization problems (MOPs), dynamic MOPs, noisy MOPs, combinatorial and discrete MOPs, benchmark problems, performance indicators, and applications. In addition, some future research issues are also presented.
Optimal Tracking Control of Motion Systems Tracking control of motion systems typically requires accurate nonlinear friction models, especially at low speeds, and integral action. However, building accurate nonlinear friction models is time consuming, friction characteristics dramatically change over time, and special care must be taken to avoid windup in a controller employing integral action. In this paper a new approach is proposed for the optimal tracking control of motion systems with significant disturbances, parameter variations, and unmodeled dynamics. The ‘desired’ control signal that will keep the nominal system on the desired trajectory is calculated based on the known system dynamics and is utilized in a performance index to design an optimal controller. However, in the presence of disturbances, parameter variations, and unmodeled dynamics, the desired control signal must be adjusted. This is accomplished by using neural network based observers to identify these quantities, and update the control signal on-line. This formulation allows for excellent motion tracking without the need for the addition of an integral state. The system stability is analyzed and Lyapunov based weight update rules are applied to the neural networks to guarantee the boundedness of the tracking error, disturbance estimation error, and neural network weight errors. Experiments are conducted on the linear axes of a mini CNC machine for the contour control of two orthogonal axes, and the results demonstrate the excellent performance of the proposed methodology.
Convex Combination Filtered-X Algorithms for Active Noise Control Systems Adaptive filtering schemes exhibit a compromise between convergence speed and steady-state mean square error. Trying to overcome this trade-off, convex combinations of adaptive filters have recently been developed for system identification, achieving better performance than traditional approaches. The purpose of this work is to apply the convex combination strategy to single-channel and multichannel active noise control systems. In these systems it is necessary to take into account the secondary path between the adaptive filter output and the error sensor and the possible unavailability of the disturbance signal, which depends on the filtering scheme considered. Even though this strategy involves a higher computational burden than classic adaptive filters, it exhibits good performance in terms of convergence speed and steady-state mean square error.
Multicriteria adaptive differential evolution for global numerical optimization Differential evolution (DE) has become a prevalent tool for global optimization problems since it was proposed in 1995. As usual, when applying DE to a specific problem, determining the most proper strategy and its associated parameter values is time-consuming. Moreover, to achieve good performance, DE often requires different strategies combined with different parameter values at different evolution stages. Thus, integrating several strategies in one algorithm and determining the application rate of each strategy as well as its associated parameter values online has become an active research topic. This paper proposes a novel DE algorithm, called multicriteria adaptive DE (MADE), for global numerical optimization. In MADE, a multicriteria adaptation scheme is introduced to determine the trial vector generation strategies, and the control parameters of each strategy are separately adjusted according to their most recently successful values. In the multicriteria adaptation scheme, the impacts of an operator application are measured in terms of exploitation and exploration capabilities, and correspondingly a multi-objective decision procedure is introduced to aggregate the impacts. Thirty-eight scalable numerical optimization problems with various characteristics and two real-world problems are used to test the proposed idea. Results show that MADE is superior or competitive to six well-known DE variants in terms of solution quality and convergence performance.
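For context on the trial-vector generation strategies MADE adapts among, here is a minimal sketch of one classic strategy, DE/rand/1/bin, with fixed F and CR; MADE's contribution is precisely to choose among several such strategies and tune these parameters online.

```python
import random

def de_rand_1_bin(pop, f, F=0.5, CR=0.9):
    # One generation of DE/rand/1/bin: mutate with three distinct random
    # vectors, apply binomial crossover, and keep the trial only if it is
    # no worse than the parent (greedy selection).
    n, d = len(pop), len(pop[0])
    nxt = []
    for i in range(n):
        a, b, c = random.sample([j for j in range(n) if j != i], 3)
        jrand = random.randrange(d)
        trial = [pop[a][k] + F * (pop[b][k] - pop[c][k])
                 if (random.random() < CR or k == jrand) else pop[i][k]
                 for k in range(d)]
        nxt.append(trial if f(trial) <= f(pop[i]) else pop[i])
    return nxt

sphere = lambda x: sum(v * v for v in x)
pop = [[random.uniform(-5, 5) for _ in range(3)] for _ in range(10)]
for _ in range(50):
    pop = de_rand_1_bin(pop, sphere)
print(min(sphere(x) for x in pop))  # decreases toward 0
```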
Vibration Control With MEMS Electrostatic Drives: A Self-Sensing Approach Nanopositioning is the actuation and sensing of motion on the nanometer scale and recent nanopositioner designs have been utilizing microelectromechanical systems (MEMS). This brief demonstrates a simple method to implement vibration control on a MEMS nanopositioner. The actuation and sensing of the system are performed with a MEMS electrostatic drive. The electrostatic drive is arranged to be self-sensing, that is, the drive’s voltage is used to actuate the system and the drive’s current is used to observe the system. With this arrangement, the current is proportional to velocity at the resonance frequency and velocity feedback is used to damp the nanopositioner. To filter the current signal and recover a displacement signal, a charge measurement may be preferred to a current measurement. The self-sensing arrangement was modified to be a charge sensor and resonant control was applied to damp the nanopositioner. With this arrangement, the gain at the resonance frequency was attenuated by 18.45 dB.
Adaptive Cooperative Output Regulation for a Class of Nonlinear Multi-Agent Systems In this technical note, an adaptive cooperative output regulation problem for a class of nonlinear multi-agent systems is considered. The cooperative output regulation problem is first converted into an adaptive stabilization problem for an augmented multi-agent system. A distributed adaptive control law with adoption of Nussbaum gain technique is then proposed to globally stabilize this augmented system. This control scheme is designed such that, in the presence of unknown control direction and large parameter variations in each agent, the closed-loop system maintains global stability and the output of each agent tracks a class of prescribed signals asymptotically.
Self-constructing wavelet neural network algorithm for nonlinear control of large structures An adaptive control algorithm is presented for nonlinear vibration control of large structures subjected to dynamic loading. It is based on integration of a self-constructing wavelet neural network (SCWNN) developed specifically for structural system identification with an adaptive fuzzy sliding mode control approach. The algorithm is particularly suitable when the physical properties such as the stiffnesses and damping ratios of the structural system are unknown or partially known, which is the case when a structure is subjected to an extreme dynamic event such as an earthquake, as the structural properties change during the event. SCWNN is developed for functional approximation of the nonlinear behavior of large structures using neural networks and wavelets. In contrast to earlier work, the identification and control are processed simultaneously, which makes the resulting adaptive control more applicable to real life situations. A two-part growing and pruning criterion is developed to construct the hidden layer in the neural network automatically. A fuzzy compensation controller is developed to reduce the chattering phenomenon. The robustness of the proposed algorithm is achieved by deriving a set of adaptive laws for determining the unknown parameters of wavelet neural networks using two Lyapunov functions. No offline training of the neural network is necessary for the system identification process. In addition, the earthquake signals are considered as unidentified. This is particularly important for on-line vibration control of large civil structures, since the external dynamic loading due to an earthquake is not available in advance. The model is applied to vibration control of a benchmark problem: a seismically excited continuous cast-in-place prestressed concrete box-girder highway bridge.
Gradient-Based Learning Applied to Document Recognition Multilayer neural networks trained with the back-propagation algorithm constitute the best example of a successful gradient based learning technique. Given an appropriate network architecture, gradient-based learning algorithms can be used to synthesize a complex decision surface that can classify high-dimensional patterns, such as handwritten characters, with minimal preprocessing. This paper rev...
Local and global properties in networks of processors (Extended Abstract) This paper attempts to get at some of the fundamental properties of distributed computing by means of the following question: “How much does each processor in a network of processors need to know about its own identity, the identities of other processors, and the underlying connection network in order for the network to be able to carry out useful functions?” The approach we take is to require that the processors be designed without any knowledge (or only very broad knowledge) of the networks they are to be used in, and furthermore, that all processors with the same number of communication ports be identical. Given a particular network function, e.g., setting up a spanning tree, we ask whether processors may be designed so that when they are embedded in any connected network and started in some initial configuration, they are guaranteed to accomplish the desired function.
MDVM System Concept, Paging Latency and Round-2 Randomized Leader Election Algorithm in SG The future trend in the computing paradigm is marked by mobile computing based on mobile-client/server architecture connected by wireless communication networks. However, mobile computing systems have limitations because of the resource-thin mobile clients operating on battery power. The MDVM system allows the mobile clients to utilize memory and CPU resources of Server-Groups (SG) to overcome the resource limitations of clients in order to support high-end mobile applications such as m-commerce and virtual organization (VO). In this paper the concept of the MDVM system and the architecture of a cellular network containing the SG are discussed. A round-2 randomized distributed algorithm is proposed to elect a unique leader and co-leader of the SG. The algorithm is free from any assumption about network topology and buffer space limitations, and is based on dynamically elected coordinators, eliminating a single point of failure. The algorithm is implemented in a distributed system setup and the network-paging latency values of wired and wireless networks are measured experimentally. The experimental results demonstrate that in most cases the algorithm successfully terminates in the first round, and the possibility of second-round execution decreases significantly with the increase in the size of the SG (|N_a|). The overall message complexity of the algorithm is O(|N_a|). The comparative study of network-paging latencies indicates that 3G/4G mobile communication systems would support the realization of the MDVM system.
Sequential approximation of feasible parameter sets for identification with set membership uncertainty In this paper the problem of approximating the feasible parameter set for identification of a system in a set membership setting is considered. The system model is linear in the unknown parameters. A recursive procedure providing an approximation of the parameter set of interest through parallelotopes is presented, and an efficient algorithm is proposed. Its computational complexity is similar to that of the commonly used ellipsoidal approximation schemes. Numerical results are also reported on some simulation experiments conducted to assess the performance of the proposed algorithm.
A 10-Bit 800-MHz 19-mW CMOS ADC A pipelined ADC employs charge-steering op amps to relax the trade-offs among speed, noise, and power consumption. Applying full-rate nonlinearity and gain error calibration, a prototype realized in 65-nm CMOS technology achieves an SNDR of 52.2 dB at an input frequency of 399.2MHz and an FoM of 53 fJ/conversion-step.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54 ×(CPU), 1.56 ×(GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02 ×(memristor-based PIM) and 10.48 ×(CMOS-based PIM), on average.
Scores: 1.213333, 0.213333, 0.213333, 0.213333, 0.213333, 0.213333, 0.213333, 0.06, 0, 0, 0, 0, 0, 0
Compressive Sensing [Lecture Notes] This lecture note presents a new method to capture and represent compressible signals at a rate significantly below the Nyquist rate. This method, called compressive sensing, employs nonadaptive linear projections that preserve the structure of the signal; the signal is then reconstructed from these projections using an optimization process.
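As one concrete instance of the reconstruction step mentioned above, here is a minimal sketch using orthogonal matching pursuit, a standard greedy recovery procedure (the note itself emphasizes convex l1 minimization); matrix sizes and sparsity are illustrative.

```python
import numpy as np

def omp(A, y, k):
    # Greedily pick the column most correlated with the residual, then
    # re-fit y on the selected columns by least squares.
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))
        support.append(j)
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s
    x = np.zeros(A.shape[1])
    x[support] = x_s
    return x

rng = np.random.default_rng(1)
n, m, k = 128, 40, 3                       # m << n nonadaptive measurements
A = rng.normal(size=(m, n)) / np.sqrt(m)   # random projection matrix
x_true = np.zeros(n)
x_true[[5, 40, 90]] = [1.0, -2.0, 0.5]
x_hat = omp(A, A @ x_true, k)
print(np.allclose(x_hat, x_true, atol=1e-6))  # True with high probability
```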
Theory and Implementation of an Analog-to-Information Converter using Random Demodulation The new theory of compressive sensing enables direct analog-to-information conversion of compressible signals at sub-Nyquist acquisition rates. The authors develop new theory, algorithms, performance bounds, and a prototype implementation for an analog-to-information converter based on random demodulation. The architecture is particularly apropos for wideband signals that are sparse in the time-frequency plane. End-to-end simulations of a complete transistor-level implementation prove the concept under the effect of circuit nonidealities.
Ultra-High Input Impedance, Low Noise Integrated Amplifier for Noncontact Biopotential Sensing Noncontact electrocardiogram/electroencephalogram/electromyogram electrodes, which operate primarily through capacitive coupling, have been extensively studied for unobtrusive physiological monitoring. Previous implementations using discrete off-the-shelf amplifiers have been encumbered by the need for manually tuned input capacitance neutralization networks and complex dc-biasing schemes. We have designed and fabricated a custom integrated noncontact sensor front-end amplifier that fully bootstraps internal and external parasitic impedances. DC stability without the need for external large-valued resistances is ensured by an ac-bootstrapped, low-leakage, on-chip biasing network. The amplifier achieves, without neutralization, an input impedance of 60 fF ∥ 50 TΩ, input-referred noise of 0.05 fA/√Hz and 200 nV/√Hz at 1 Hz, and a current consumption of 1.5 μA per channel at a 3.3 V supply voltage. Stable frequency response is demonstrated below 0.05 Hz with electrode coupling capacitances as low as 0.5 pF.
A high input impedance low-noise instrumentation amplifier with JFET input This paper presents a high input impedance instrumentation amplifier with low-noise, low-power operation. A JFET input pair is employed instead of CMOS to significantly reduce the flicker noise. This amplifier features high input impedance (15.3 GΩ∥1.39 pF) by using a current feedback technique and the JFET input. The amplifier has a mid-band gain of 39.9 dB, draws 3.65 μA from a 2.8-V supply, and exhibits an input-referred noise of 3.81 μVrms integrated from 10 mHz to 100 kHz, corresponding to a noise efficiency factor (NEF) of 3.23.
A 0.5–1.1-V Adaptive Bypassing SAR ADC Utilizing the Oscillation-Cycle Information of a VCO-Based Comparator A successive approximation register (SAR) analog-to-digital converter (ADC) with a voltage-controlled oscillator (VCO)-based comparator is presented in this paper. The relationship between the input voltage and the number of oscillation cycles (NOC) to reach a VCO-comparator decision is explored, implying an inherent coarse quantization in parallel with the normal comparison. The NOC as a design parameter is introduced and analyzed with noise, metastability, and tradeoff considerations. The NOC is exploited to bypass a certain number of SAR cycles for higher power efficiency of VCO-based SAR ADCs. To cope with the process, voltage, and temperature (PVT) variations, an adaptive bypassing technique is proposed, tracking and correcting window sizes in the background. Fabricated in a 40-nm CMOS process, the ADC achieves a peak effective number of bits of 9.71 b at 10 MS/s. Walden figure of merit (FoM) of 2.4–6.85 fJ/conv.-step is obtained over a wide range of supply voltages and sampling rates. Measurement has been carried out under typical, fast-fast, and slow-slow process corners and 0 °C–100 °C temperature range, showing that the proposed ADC is robust over PVT variations without any off-chip calibration or tuning.
Neuropixels Data-Acquisition System: A Scalable Platform for Parallel Recording of 10,000+ Electrophysiological Signals. Although CMOS fabrication has enabled a quick evolution in the design of high-density neural probes and neural-recording chips, the scaling and miniaturization of the complete data-acquisition systems has happened at a slower pace. This is mainly due to the complexity and the many requirements that change depending on the specific experimental settings. In essence, the fundamental challenge of a n...
A Data-Compressive Wired-OR Readout for Massively Parallel Neural Recording. Neural interfaces of the future will be used to help restore lost sensory, motor, and other capabilities. However, realizing this futuristic promise requires a major leap forward in how electronic devices interface with the nervous system. Next generation neural interfaces must support parallel recording from tens of thousands of electrodes within the form factor and power budget of a fully implan...
A 15-Channel Digital Active Electrode System for Multi-Parameter Biopotential Measurement This paper presents a digital active electrode (DAE) system for multi-parameter biopotential signal acquisition in portable and wearable devices. It is built around an IC that performs analog signal processing and digitization with the help of on-chip instrumentation amplifiers, a 12 bit ADC and a digital interface. Via a standard I2C bus, up to 16 digital active electrodes (15 channels) can be connected to a commercially available microcontroller, thus significantly reducing system complexity and cost. In addition, the DAE utilizes an innovative functionally DC-coupled amplifier to preserve the input DC signal, while still achieving state-of-the-art performance: 60 nV/√Hz input-referred noise and ±350 mV electrode-offset tolerance. A common-mode feedforward scheme improves the CMRR of an AE pair from 40 dB to a maximum of 102 dB.
An Integrated Full-Wave CMOS Rectifier With Built-In Back Telemetry for RFID and Implantable Biomedical Applications This paper describes the design and implementation of an integrated full-wave standard CMOS rectifier with built-in passive back telemetry mechanism for radio frequency identification (RFID) and implantable biomedical device applications. The new rectifier eliminates the need for additional large switches for load modulation and provides more flexibility in choosing the most appropriate load shift keying (LSK) mechanism through shorting and/or opening the transponder coil for any certain application. The results are a more robust back telemetry link, improved read range, higher back telemetry data rate, reduced rectifier dropout voltage, and savings in chip area compared to the traditional topologies. A prototype version of the new rectifier is implemented in the AMI 0.5-μm n-well 3-metal 2-poly 5 V standard CMOS process, occupying ~0.25 mm² of chip area. The prototype rectifier was powered through a wireless inductive link and proved to be fully functional in its three modes of operation: rectification, open coil (OC), and short coil (SC).
Network-based robust H∞ control of systems with uncertainty This paper is concerned with the design of robust H∞ controllers for uncertain networked control systems (NCSs) with the effects of both the network-induced delay and data dropout taken into consideration. A new analysis method for H∞ performance of NCSs is provided by introducing some slack matrix variables and employing the information of the lower bound of the network-induced delay. The designed H∞ controller is of memoryless type, which can be obtained by solving a set of linear matrix inequalities. Numerical examples and simulation results are given finally to illustrate the effectiveness of the method.
A 65 nm CMOS Quad-Band SAW-Less Receiver SoC for GSM/GPRS/EDGE. A quad-band 2.5G receiver is designed to replace the front-end SAW filters with on-chip bandpass filters and to integrate the LNA matching components, as well as the RF baluns. The receiver achieves a typical sensitivity of -110 dBm or better, while saving a considerable amount of BOM. Utilizing an arrangement of four baseband capacitors and MOS switches driven by 4-phase 25% duty-cycle clocks, high-Q BPFs are realized to attenuate the 0 dBm out-of-band blocker. The 65 nm CMOS SAW-less receiver, integrated as part of a 2.5G SoC, draws 55 mA from the battery and measures an out-of-band 1-dB compression point of greater than +2 dBm. Measured stand-alone, as well as with the baseband running in call mode at the platform level, the receiver passes the 3GPP specifications with margin.
Secure random number generation in wireless sensor networks Reliable random number generation is crucial for many available security algorithms, and some of the methods presented in literature proposed to generate them based on measurements collected from the physical environment, in order to ensure true randomness. However the effectiveness of such methods can be compromised if an attacker is able to gain access to the measurements thus inferring the generated random number. In our paper, we present an algorithm that guarantees security for the generation process, in a real world scenario using wireless sensor nodes as the sources of the physical measurements. The proposed method uses distributed leader election for selecting a random source of data. We prove the robustness of the algorithm by discussing common security attacks, and we present theoretical and experimental evaluation regarding its complexity in terms of time and exchanged messages.
A 40 V 10 W 93%-Efficiency Current-Accuracy-Enhanced Dimmable LED Driver With Adaptive Timing Difference Compensation for Solid-State Lighting Applications This paper presents a floating-buck dimmable LED driver for solid-state lighting applications. In the proposed driver, an adaptive timing difference compensation (ATDC) is developed to adaptively adjust the off-time of the low-side power switch to enable the driver to achieve high accuracy of the average LED current over a wide range of input voltages and number of output LED loads, fast settling time, and high operation frequency. The power efficiency benefits from the capabilities of using synchronous rectifier and having no sensing resistor in the power stage. The synchronous rectification under high input supply voltage is enabled by a proposed high-speed and low-power gate driver with pseudo-digital level shifters. Implemented in a 0.35 μm 50 V CMOS process, experimental results show that the proposed LED driver can operate at 1 MHz and achieve peak power efficiency of 93% to support a wide range of series-connected output LEDs from 1 to 10 and a wide input range from 10 to 40 V. The proposed LED driver has only 2.8% current error from the average LED current of 345 mA and settles within 8.5 μs after triggering the dimming condition, improving the settling time by 14 times compared with the state-of-the-art LED drivers.
Optimal Pricing of Public Electric Vehicle Charging Stations Considering Operations of Coupled Transportation and Power Systems Recognized as an efficient approach to reduce fossil fuel consumption and alleviate environment crisis, the adoption of electric vehicles (EVs) in urban transportation system is receiving more and more attention. EVs will tightly couple the operations of urban transportation network (UTN) and power distribution network (PDN), necessitating the interdependent traffic-power modeling to optimize the ...
Scores: 1.070927, 0.066667, 0.066667, 0.066667, 0.066667, 0.066667, 0.033333, 0.001186, 0.000001, 0, 0, 0, 0, 0
How to share a secret In this paper we show how to divide data D into n pieces in such a way that D is easily reconstructable from any k pieces, but even complete knowledge of k - 1 pieces reveals absolutely no information about D. This technique enables the construction of robust key management schemes for cryptographic systems that can function securely and reliably even when misfortunes destroy half the pieces and security breaches expose all but one of the remaining pieces.
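The construction behind this guarantee is polynomial interpolation over a finite field: the secret is the constant term of a random degree-(k-1) polynomial and each share is one evaluation. A minimal Python sketch with an illustrative prime modulus:

```python
import random

P = 2**127 - 1  # illustrative Mersenne prime; the secret must be below P

def split(secret, n, k):
    # Random degree-(k-1) polynomial with the secret as constant term;
    # shares are the points (x, f(x)) for x = 1..n.
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    f = lambda x: sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 recovers the constant term, i.e. D.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = split(123456789, n=5, k=3)
assert reconstruct(shares[:3]) == 123456789   # any 3 of the 5 shares suffice
```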
Reliability Management for Blockchain-Based Decentralized Multi-Cloud Blockchain-based decentralized multi-cloud has the potential to reduce cloud infrastructure costs and to enable geographically distributed providers of any size to monetize their computational resources. In this context, guarantees that the computational results are delivered within the promised time and budget must be provided despite the limited information available about the location and ownership of resources. Providers might claim to execute the services to get compensated for the computation even though returning incomplete or incorrect results. In this paper, we define a model to predict provider reliability, that is, the probability of failure-free execution of computational tasks and correctness of the computed outputs, by extracting the potential dependencies between providers from historical log traces. This model can then be utilized in the definition of provider reputation or the scheduling of new services. Indeed, we propose a probabilistic scheduler that chooses the providers that meet the reliability constraints among others. Finally, we validate the proposed solutions with real traces from a decentralized cloud provider and hint at the benefits of predicting reliability in this context.
A System for Scalable Decentralized Random Number Generation Generating public randomness has been significantly demanding and also challenging, especially after the introduction of blockchain technology. Lotteries, smart contracts, and random audits are examples where the reliability of the randomness source is a vital factor. We demonstrate a random number generation service for producing fair, tamper-resistant, and verifiable random numbers. Our protocol and system form an R&D project aiming to provide a decentralized solution to random number generation by leveraging blockchain technology along with long-lasting cryptographic primitives, including homomorphic encryption and verifiable random functions. The system decentralizes the process of generating random numbers by combining each party's favored value to obtain the final random numbers. Our novel idea is to force each party to encrypt its contribution before making it public. With the help of homomorphic encryption, all encrypted contributions can be combined without performing any decryption. The solution achieves the properties of unpredictability, tamper-resistance, and public verifiability. In addition, it incurs only linear overall complexity with respect to the number of parties on the network, which permits great scalability.
Ultimate Boundedness Control for Networked Singularly Perturbed Systems With Deception Attacks: A Markovian Communication Protocol Approach In this study, the ultimate boundedness control for a class of networked singularly perturbed systems (SPSs) with communication constraints and deception attacks is explored. To improve the observer performance, the measurement outputs are quantized with the aid of a logarithmic quantizer. Meanwhile, the Markovian communication protocol (MCP) is adopted to schedule the transmission sequence of the...
Design and Analysis of a Leader Election Algorithm for Mobile Ad Hoc Networks Leader election is a very important problem, not only in wired networks, but in mobile, ad hoc networks as well. Existing solutions to leader election do not handle frequent topology changes and the dynamic nature of mobile networks. In this paper, we present a leader election algorithm that is highly adaptive to arbitrary (possibly concurrent) topological changes and is therefore well-suited for use in mobile ad hoc networks. The algorithm is based on finding an extremum and uses diffusing computations for this purpose. We show, using linear-time temporal logic, that the algorithm is "weakly" self-stabilizing and terminating. We also simulate the algorithm in a mobile ad hoc setting. Through our simulation study, we elaborate on several important issues that can significantly impact the performance of such a protocol for mobile ad hoc networks, such as the choice of signaling and the broadcast nature of the wireless medium. Our simulation study shows that our algorithm is quite effective in that each node has a leader approximately 97-99% of the time in a variety of operating conditions.
Prefetch Side-Channel Attacks: Bypassing SMAP and Kernel ASLR. Modern operating systems use hardware support to protect against control-flow hijacking attacks such as code-injection attacks. Typically, write access to executable pages is prevented and kernel-mode execution is restricted to kernel code pages only. However, current CPUs provide no protection against code-reuse attacks like ROP. ASLR is used to prevent these attacks by making all addresses unpredictable for an attacker. Hence, kernel security relies fundamentally on preventing access to address information. We introduce Prefetch Side-Channel Attacks, a new class of generic attacks exploiting major weaknesses in prefetch instructions. This allows unprivileged attackers to obtain address information and thus compromise the entire system by defeating SMAP, SMEP, and kernel ASLR. Prefetch can fetch inaccessible privileged memory into various caches on Intel x86. It also leaks the translation level for virtual addresses on both Intel x86 and ARMv8-A. We build three attacks exploiting these properties. Our first attack retrieves an exact image of the full paging hierarchy of a process, defeating both user-space and kernel-space ASLR. Our second attack resolves virtual to physical addresses to bypass SMAP on 64-bit Linux systems, enabling ret2dir attacks. We demonstrate this from unprivileged user programs on Linux and inside Amazon EC2 virtual machines. Finally, we demonstrate how to defeat kernel ASLR on Windows 10, enabling ROP attacks on kernel and driver binary code. We propose a new form of strong kernel isolation to protect commodity systems, incurring an overhead of only 0.06-5.09%.
Cross-VM side channels and their use to extract private keys This paper details the construction of an access-driven side-channel attack by which a malicious virtual machine (VM) extracts fine-grained information from a victim VM running on the same physical computer. This attack is the first such attack demonstrated on a symmetric multiprocessing system virtualized using a modern VMM (Xen). Such systems are very common today, ranging from desktops that use virtualization to sandbox application or OS compromises, to clouds that co-locate the workloads of mutually distrustful customers. Constructing such a side-channel requires overcoming challenges including core migration, numerous sources of channel noise, and the difficulty of preempting the victim with sufficient frequency to extract fine-grained information from it. This paper addresses these challenges and demonstrates the attack in a lab setting by extracting an ElGamal decryption key from a victim using the most recent version of the libgcrypt cryptographic library.
Control-flow integrity principles, implementations, and applications Current software attacks often build on exploits that subvert machine-code execution. The enforcement of a basic safety property, control-flow integrity (CFI), can prevent such attacks from arbitrarily controlling program behavior. CFI enforcement is simple and its guarantees can be established formally, even with respect to powerful adversaries. Moreover, CFI enforcement is practical: It is compatible with existing software and can be done efficiently using software rewriting in commodity systems. Finally, CFI provides a useful foundation for enforcing further security policies, as we demonstrate with efficient software implementations of a protected shadow call stack and of access control for memory regions.
Accelerating Dependent Cache Misses with an Enhanced Memory Controller. On-chip contention increases memory access latency for multicore processors. We identify that this additional latency has a substantial effect on performance for an important class of latency-critical memory operations: those that result in a cache miss and are dependent on data from a prior cache miss. We observe that the number of instructions between the first cache miss and its dependent cache miss is usually small. To minimize dependent cache miss latency, we propose adding just enough functionality to dynamically identify these instructions at the core and migrate them to the memory controller for execution as soon as source data arrives from DRAM. This migration allows memory requests issued by our new Enhanced Memory Controller (EMC) to experience a 20% lower latency than if issued by the core. On a set of memory intensive quad-core workloads, the EMC results in a 13% improvement in system performance and a 5% reduction in energy consumption over a system with a Global History Buffer prefetcher, the highest performing prefetcher in our evaluation.
Disk Paxos We present an algorithm, called Disk Paxos, for implementing a reliable distributed system with a network of processors and disks. Like the original Paxos algorithm, Disk Paxos maintains consistency in the presence of arbitrary non-Byzantine faults. Progress can be guaranteed as long as a majority of the disks are available, even if all processors but one have failed.
Searching for black-hole faults in a network using multiple agents We consider a fixed communication network where (software) agents can move freely from node to node along the edges. A black hole is a faulty or malicious node in the network such that if an agent enters this node, then it immediately “dies.” We are interested in designing an efficient communication algorithm for the agents to identify all black holes. We assume that we have k agents starting from the same node s and knowing the topology of the whole network. The agents move through the network in synchronous steps and can communicate only when they meet in a node. At the end of the exploration of the network, at least one agent must survive and must know the exact locations of the black holes. If the network has n nodes and b black holes, then any exploration algorithm needs Ω(n/k + D_b) steps in the worst case, where D_b is the worst-case diameter of the network with at most b nodes deleted. We give a general algorithm which completes exploration in O((n/k)·log n/log log n + bD_b) steps for arbitrary networks, if b ≤ k/2. Under additional conditions, with b ≤ k/2, we give a refined algorithm which completes exploration in asymptotically optimal O(n/k) steps.
Nonlinear semidefinite programming: sensitivity, convergence, and an application in passive reduced-order modeling We consider the solution of nonlinear programs with nonlinear semidefiniteness constraints. The need for an efficient exploitation of the cone of positive semidefinite matrices makes the solution of such nonlinear semidefinite programs more complicated than the solution of standard nonlinear programs. This paper studies a sequential semidefinite programming (SSP) method, which is a generalization of the well-known sequential quadratic programming method for standard nonlinear programs. We present a sensitivity result for nonlinear semidefinite programs, and then based on this result, we give a self-contained proof of local quadratic convergence of the SSP method. We also describe a class of nonlinear semidefinite programs that arise in passive reduced-order modeling, and we report results of some numerical experiments with the SSP method applied to problems in that class.
GP-SIMD Processing-in-Memory GP-SIMD, a novel hybrid general-purpose SIMD computer architecture, resolves the issue of data synchronization by in-memory computing, combining data storage and massively parallel processing. GP-SIMD employs a two-dimensional access memory with modified SRAM storage cells and a bit-serial processing unit per memory row. An analytic performance model of the GP-SIMD architecture is presented, comparing it to an associative processor and to conventional SIMD architectures. Cycle-accurate simulation of four workloads supports the analytical comparison. Assuming a moderate die area, the GP-SIMD architecture outperforms both the associative processor and conventional SIMD coprocessor architectures by almost an order of magnitude while consuming less power.
A 12.6 mW, 573-2901 kS/s Reconfigurable Processor for Reconstruction of Compressively Sensed Physiological Signals. This article presents a reconfigurable processor based on the alternating direction method of multipliers (ADMM) algorithm for reconstructing compressively sensed physiological signals. The architecture is flexible to support physiological ExG [electrocardiography (ECG), electromyography (EMG), and electroencephalography (EEG)] signal with various signal dimensions (128, 256, 384, and 512). Data c...
score_0 … score_13: 1.06734, 0.066667, 0.066667, 0.022222, 0.000687, 0.000208, 0.000065, 0.000007, 0, 0, 0, 0, 0, 0
A High-Voltage Low-Power DC-DC buck regulator for automotive applications This work presents a High-Voltage Low-Power CMOS DC-DC buck regulator for automotive applications. The overall system, including the high- and low-voltage analog devices, the power MOS and the low-voltage digital devices, was realized in the Austriamicrosystems 0.35 μm HVCMOS technology, resulting in a 6.5 mm² die. The regulator is able to manage supply voltages from 4.5 V up to 50 V and generates a fixed regulated output voltage of 5 V or a variable one over the whole automotive temperature range. The regulator sinks only a maximum of 1.8 μA of current in standby mode and a maximum of 25 μA when no load is connected. It can be used to supply low-voltage devices from the battery when low power dissipation and low current consumption are needed. The system output current can be selected in the range 350-700 mA. When a higher output current is needed, it is possible to connect several regulators in parallel, multiplying the output current.
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use {\em dominance frontiers}, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
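Since the dominance frontier is this paper's key new concept, here is a hedged sketch of the widely used formulation that computes frontiers by walking up the dominator tree from each join point's predecessors (the later Cooper-Harvey-Kennedy style, not necessarily the paper's exact algorithm). `preds` and `idom` are assumed inputs; φ-functions for SSA are then placed at iterated dominance frontiers of definition sites.

```python
def dominance_frontiers(preds, idom):
    """preds: node -> list of CFG predecessors; idom: node -> immediate dominator.
    Returns a dict mapping each node to its dominance frontier."""
    df = {n: set() for n in preds}
    for n, ps in preds.items():
        if len(ps) >= 2:  # only join points contribute frontier entries
            for p in ps:
                runner = p
                while runner != idom[n]:   # climb the dominator tree
                    df[runner].add(n)
                    runner = idom[runner]
    return df

# Diamond CFG: entry -> a, b; a, b -> merge.
preds = {'entry': [], 'a': ['entry'], 'b': ['entry'], 'merge': ['a', 'b']}
idom = {'entry': 'entry', 'a': 'entry', 'b': 'entry', 'merge': 'entry'}
print(dominance_frontiers(preds, idom))  # 'a' and 'b' each have {'merge'}
```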
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ⋄W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ⋄W. Thus, ⋄W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
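A minimal sketch of Chord's core operation, the consistent-hashing map from a key to the node that stores it (the key's successor on the identifier ring). It omits the finger tables that give O(log N) lookups, as well as joins and failures; the identifier width and names are illustrative.

```python
import hashlib

M = 16  # identifier bits: ring positions 0 .. 2**M - 1

def ident(name):
    """Hash a node name or key onto the identifier ring."""
    return int(hashlib.sha1(name.encode()).hexdigest(), 16) % (2 ** M)

def successor(ring, i):
    """Owner of identifier i: the first node clockwise at or after i."""
    for node in ring:  # ring is sorted
        if node >= i:
            return node
    return ring[0]     # wrap around

ring = sorted(ident(f"node-{k}") for k in range(8))
key = "some/data/item"
print(f"key id {ident(key)} is stored at node id {successor(ring, ident(key))}")
```

Because keys and nodes share one hash space, a node joining or leaving only moves the keys adjacent to it on the ring, which is what lets Chord adapt while the system is continuously changing.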
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap, including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area² product (EDA²P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account, clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account, configuring clusters with 4 cores gives the best EDA²P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
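For reference, the scaled-form iteration of the method for min f(x) + g(z) subject to Ax + Bz = c, as it is standardly stated (u is the scaled dual variable, ρ > 0 the penalty parameter):

```latex
\begin{aligned}
x^{k+1} &:= \operatorname*{arg\,min}_{x}\; f(x) + \tfrac{\rho}{2}\,\lVert Ax + Bz^{k} - c + u^{k}\rVert_2^2,\\
z^{k+1} &:= \operatorname*{arg\,min}_{z}\; g(z) + \tfrac{\rho}{2}\,\lVert Ax^{k+1} + Bz - c + u^{k}\rVert_2^2,\\
u^{k+1} &:= u^{k} + Ax^{k+1} + Bz^{k+1} - c.
\end{aligned}
```

The split between f and g is what makes the method suit distributed settings: each update touches only one of the two objective terms.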
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)-size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε²}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D + √n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(log N), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator; while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via error amplifier) with a moderate-gain high-frequency path (via bandpass filter) at the inputs of PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate two zeros required in a Type III compensator. Constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process with constant Gm/C biasing technique being applied to one of the designs. Measurement result shows that converter output is settled within 7 μs for a load current step of 500 mA. Peak efficiency of 97% is obtained at 360 mW output power, and high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of proposed compensator is reduced by > 75 % in both designs, compared to an equivalent conventional Type III compensator.
Conductance modulation techniques in switched-capacitor DC-DC converter for maximum-efficiency tracking and ripple mitigation in 22nm Tri-gate CMOS Active conduction modulation techniques are demonstrated in a fully integrated multi-ratio switched-capacitor voltage regulator with hysteretic control, implemented in 22nm tri-gate CMOS with high-density MIM capacitors. We present (i) an adaptive switching frequency and switch-size scaling scheme for maximum-efficiency tracking across a wide range of voltages and currents, governed by a frequency-based control law that is experimentally validated across multiple dies and temperatures, and (ii) a simple active ripple mitigation technique that modulates the gate drive of select MOSFET switches effectively in all conversion modes. Efficiency boosts of up to 15% are measured under light-load conditions. Load-independent output ripple of <50 mV is achieved, enabling fewer interleaving phases. Testchip implementations and measurements demonstrate ease of integration in SoC designs, power-efficiency benefits, and EMI/RFI improvements.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized, while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware-heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
score_0 … score_13: 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Tensaurus: A Versatile Accelerator for Mixed Sparse-Dense Tensor Computations Tensor factorizations are powerful tools in many machine learning and data analytics applications. Tensors are often sparse, which makes sparse tensor factorizations memory bound. In this work, we propose a hardware accelerator that can accelerate both dense and sparse tensor factorizations. We co-design the hardware and a sparse storage format, which allows accessing the sparse data in vectorized and streaming fashion and maximizes the utilization of the memory bandwidth. We extract a common computation pattern that is found in numerous matrix and tensor operations and implement it in the hardware. By designing the hardware based on this common compute pattern, we can not only accelerate tensor factorizations but also mixed sparse-dense matrix operations. We show significant speedup and energy benefit over the state-of-the-art CPU and GPU implementations of tensor factorizations and over CPU, GPU and accelerators for matrix operations.
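The abstract does not spell out the common compute pattern it extracts, so as a hedged illustration of the kind of kernel such accelerators target, here is a toy dense MTTKRP (the bottleneck of CP tensor factorization) in NumPy. Real workloads would keep the tensor in a sparse format like the one co-designed in the paper; the shapes and names are illustrative.

```python
import numpy as np

def mttkrp_mode0(X, B, C):
    """Mode-0 MTTKRP: M[i, r] = sum over j, k of X[i, j, k] * B[j, r] * C[k, r]."""
    return np.einsum('ijk,jr,kr->ir', X, B, C)

I, J, K, R = 4, 5, 6, 3
X = np.random.rand(I, J, K)          # dense stand-in for a sparse tensor
B, C = np.random.rand(J, R), np.random.rand(K, R)
M = mttkrp_mode0(X, B, C)            # factor-matrix update, shape (I, R)
print(M.shape)
```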
On the evolution of user interaction in Facebook Online social networks have become extremely popular; numerous sites allow users to interact and share content using social links. Users of these networks often establish hundreds to even thousands of social links with other users. Recently, researchers have suggested examining the activity network - a network that is based on the actual interaction between users, rather than mere friendship - to distinguish between strong and weak links. While initial studies have led to insights on how an activity network is structurally different from the social network itself, a natural and important aspect of the activity network has been disregarded: the fact that over time social links can grow stronger or weaker. In this paper, we study the evolution of activity between users in the Facebook social network to capture this notion. We find that links in the activity network tend to come and go rapidly over time, and the strength of ties exhibits a general decreasing trend of activity as the social network link ages. For example, only 30% of Facebook user pairs interact consistently from one month to the next. Interestingly, we also find that even though the links of the activity network change rapidly over time, many graph-theoretic properties of the activity network remain unchanged.
The polyhedral model is more widely applicable than you think The polyhedral model is a powerful framework for automatic optimization and parallelization. It is based on an algebraic representation of programs that allows one to construct and search for complex sequences of optimizations. This model is now mature and has reached production compilers. The main limitation of the polyhedral model is known to be its restriction to statically predictable, loop-based program parts. This paper removes this limitation, allowing the model to operate on general data-dependent control flow. We embed control and exit predicates as first-class citizens of the algebraic representation, from program analysis to code generation. Complementing previous (partial) attempts in this direction, our work concentrates on extending the code generation step and does not compromise the expressiveness of the model. We present experimental evidence that our extension is relevant for program optimization and parallelization, showing performance improvements on benchmarks that were thought to be out of reach of the polyhedral model.
Sparse matrix multiplication: The distributed block-compressed sparse row library. An implementation of a sparse matrix-matrix multiplication library is described. The library was developed to support linear-scaling quantum simulations. Performance is high for a variety of matrix sparsities. We show that the library scales to tens of thousands of processor cores.
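As a single-node illustration of the operation the library distributes (the library's actual format is block-compressed and the computation spans many nodes; this toy uses plain CSR with assumed array names):

```python
import numpy as np

def csr_matmul_dense(indptr, indices, data, B):
    """Compute A @ B where A is sparse in CSR form and B is dense."""
    n_rows = len(indptr) - 1
    out = np.zeros((n_rows, B.shape[1]))
    for i in range(n_rows):
        for idx in range(indptr[i], indptr[i + 1]):   # nonzeros of row i
            out[i] += data[idx] * B[indices[idx]]
    return out

# A = [[1, 0, 2], [0, 3, 0]] in CSR form
indptr, indices, data = [0, 2, 3], [0, 2, 1], [1.0, 2.0, 3.0]
print(csr_matmul_dense(indptr, indices, data, np.eye(3)))  # recovers A densely
```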
AccPar: Tensor Partitioning for Heterogeneous Deep Learning Accelerators Deep neural network (DNN) accelerators as an example of domain-specific architecture have demonstrated great success in DNN inference. However, architecture acceleration for the equally important DNN training has not yet been fully studied. With data forward, error backward, and gradient calculation, DNN training is a more complicated process with higher computation and communication intensity. Because recent research demonstrates a diminishing specialization return, namely, the "accelerator wall", we believe that a promising approach is to explore coarse-grained parallelism among multiple performance-bounded accelerators to support DNN training. Distributing computations across multiple heterogeneous accelerators to achieve high throughput and balanced execution, however, remains challenging. We present ACCPAR, a principled and systematic method of determining the tensor partition among heterogeneous accelerator arrays. Compared to prior empirical or unsystematic methods, ACCPAR considers the complete tensor partition space and can reveal previously unknown parallelism configurations. ACCPAR optimizes the performance based on a cost model that takes into account both the computation and communication costs of a heterogeneous execution environment. Hence, our method can avoid the drawbacks of existing approaches that use communication as a proxy for performance. The enhanced flexibility of tensor partitioning in ACCPAR allows flexible ratios of computations to be distributed among accelerators with different performance. The proposed search algorithm is also applicable to the emerging multi-path patterns in modern DNNs such as ResNet. We simulate ACCPAR on a heterogeneous accelerator array composed of both TPU-v2 and TPU-v3 accelerators for the training of large-scale DNN models such as AlexNet and the VGG and ResNet series. The average performance improvements of the state-of-the-art "one weird trick" (OWT), HYPAR, and ACCPAR, normalized to the baseline data-parallelism scheme where each accelerator replicates the model and processes different input data in parallel, are 2.98×, 3.78×, and 6.30×, respectively.
Hardware acceleration of database operations As the amount of memory in database systems grows, entire database tables, or even databases, are able to fit in the system's memory, making in-memory database operations more prevalent. This shift from disk-based to in-memory database systems has contributed to a move from row-wise to columnar data storage. Furthermore, common database workloads have grown beyond online transaction processing (OLTP) to include online analytical processing and data mining. These workloads analyze huge datasets that are often irregular and not indexed, making traditional database operations like joins much more expensive. In this paper we explore using dedicated hardware to accelerate in-memory database operations. We present hardware to accelerate the selection process of compacting a single column into a linear column of selected data, joining two sorted columns via merging, and sorting a column. Finally, we put these primitives together to accelerate an entire join operation. We implement a prototype of this system using FPGAs and show substantial improvements in both absolute throughput and utilization of memory bandwidth. Using the prototype as a guide, we explore how the hardware resources required by our design change with the desired throughput.
Capstan: A Vector RDA for Sparsity This paper proposes Capstan: a scalable, parallel-patterns-based, reconfigurable dataflow accelerator (RDA) for sparse and dense tensor applications. Instead of designing for one application, we start with common sparse data formats, each of which supports multiple applications. Using a declarative programming model, Capstan supports application-independent sparse iteration and memory primitives that can be mapped to vectorized, high-performance hardware. We optimize random-access sparse memories with configurable out-of-order execution to increase SRAM random-access throughput from 32% to 80%. For a variety of sparse applications, Capstan with DDR4 memory is 18× faster than a multi-core CPU baseline, while Capstan with HBM2 memory is 16× faster than an Nvidia V100 GPU. For sparse applications that can be mapped to Plasticine, a recent dense RDA, Capstan is 7.6× to 365× faster and only 16% larger.
PipeLayer: A Pipelined ReRAM-Based Accelerator for Deep Learning Convolution neural networks (CNNs) are the heart of deep learning applications. The recent works PRIME [1] and ISAAC [2] demonstrated the promise of using resistive random access memory (ReRAM) to perform neural computations in memory. We found that training cannot be efficiently supported with the current schemes. First, they do not consider weight update and the complex data dependencies of the training procedure. Second, ISAAC attempts to increase system throughput with a very deep pipeline, which is only beneficial when a large number of consecutive images can be fed into the architecture. In training, the notion of a batch (e.g., 64) limits the number of images that can be processed consecutively, because the images in the next batch need to be processed based on the updated weights. Third, the deep pipeline in ISAAC is vulnerable to pipeline bubbles and execution stalls. In this paper, we present PipeLayer, a ReRAM-based PIM accelerator for CNNs that supports both training and testing. We analyze data dependency and weight update in training algorithms and propose an efficient pipeline to exploit inter-layer parallelism. To exploit intra-layer parallelism, we propose a highly parallel design based on the notions of parallelism granularity and weight replication. With these design choices, PipeLayer enables highly pipelined execution of both training and testing, without introducing the potential stalls of previous work. The experimental results show that PipeLayer achieves an average speedup of 42.45× compared with a GPU platform. The average energy saving of PipeLayer compared with the GPU implementation is 7.17×.
Fully integrated wideband high-current rectifiers for inductively powered devices This paper describes the design and implementation of fully integrated rectifiers in BiCMOS and standard CMOS technologies for rectifying an externally generated RF carrier signal in inductively powered wireless devices, such as biomedical implants, radio-frequency identification (RFID) tags, and smartcards to generate an on-chip dc supply. Various full-wave rectifier topologies and low-power circuit design techniques are employed to decrease substrate leakage current and parasitic components, reduce the possibility of latch-up, and improve power transmission efficiency and high-frequency performance of the rectifier block. These circuits are used in wireless neural stimulating microsystems, fabricated in two processes: the University of Michigan's 3-μm 1M/2P N-epi BiCMOS, and the AMI 1.5-μm 2M/2P N-well standard CMOS. The rectifier areas are 0.12-0.48 mm2 in the above processes and they are capable of delivering >25mW from a receiver coil to the implant circuitry. The performance of these integrated rectifiers has been tested and compared, using carrier signals in 0.1-10-MHz range.
Local and global properties in networks of processors (Extended Abstract) This paper attempts to get at some of the fundamental properties of distributed computing by means of the following question: “How much does each processor in a network of processors need to know about its own identity, the identities of other processors, and the underlying connection network in order for the network to be able to carry out useful functions?” The approach we take is to require that the processors be designed without any knowledge (or only very broad knowledge) of the networks they are to be used in, and furthermore, that all processors with the same number of communication ports be identical. Given a particular network function, e.g., setting up a spanning tree, we ask whether processors may be designed so that when they are embedded in any connected network and started in some initial configuration, they are guaranteed to accomplish the desired function.
The evolution of hardware platforms for mobile 'software defined radio' terminals. The deployment of communication systems mainly depends on the availability of appropriate microelectronics. Therefore, the Fraunhofer-Institut fur Mikroelektronische Schaltungen und Systeme (IMS) considers the combined approach to communication and microelectronic system design as crucial. This paper explores the impact of anticipated communication services for future wireless communication systems on the evolution of microelectronics for wireless terminals. A roadmap is presented which predicts the hardware/software split of future software defined radio terminals (SDR terminals). Additionally, a new philosophy for analog and digital codesign is introduced, which may help to accelerate the appearance of mobile software defined radio terminals.
Communication-efficient failure detection and consensus in omission environments Failure detectors have been shown to be a very useful mechanism to solve the consensus problem in the crash failure model, for which a number of communication-efficient algorithms have been proposed. In this paper we deal with the definition, implementation and use of communication-efficient failure detectors in the general omission failure model, where processes can fail by crashing and by omitting messages when sending and/or receiving. We first define a new failure detector class for this model in terms of completeness and accuracy properties. Then we propose an algorithm that implements a failure detector of the proposed class in a communication-efficient way, in the sense that only a linear number of links are used to send messages forever. We also explain how the well-known consensus algorithm of Chandra and Toueg can be adapted in order to use the proposed failure detector.
A Minimally Invasive 64-Channel Wireless μECoG Implant Emerging applications in brain-machine interface systems require high-resolution, chronic multisite cortical recordings, which cannot be obtained with existing technologies due to high power consumption, high invasiveness, or inability to transmit data wirelessly. In this paper, we describe a microsystem based on electrocorticography (ECoG) that overcomes these difficulties, enabling chronic recording and wireless transmission of neural signals from the surface of the cerebral cortex. The device is comprised of a highly flexible, high-density, polymer-based 64-channel electrode array and a flexible antenna, bonded to 2.4 mm × 2.4 mm CMOS integrated circuit (IC) that performs 64-channel acquisition, wireless power and data transmission. The IC digitizes the signal from each electrode at 1 kS/s with 1.2 μV input referred noise, and transmits the serialized data using a 1 Mb/s backscattering modulator. A dual-mode power-receiving rectifier reduces data-dependent supply ripple, enabling the integration of small decoupling capacitors on chip and eliminating the need for external components. Design techniques in the wireless and baseband circuits result in over 16× reduction in die area with a simultaneous 3× improvement in power efficiency over the state of the art. The IC consumes 225 μW and can be powered by an external reader transmitting 12 mW at 300 MHz, which is over 3× lower than IEEE and FCC regulations.
A 12.6 mW, 573-2901 kS/s Reconfigurable Processor for Reconstruction of Compressively Sensed Physiological Signals. This article presents a reconfigurable processor based on the alternating direction method of multipliers (ADMM) algorithm for reconstructing compressively sensed physiological signals. The architecture is flexible to support physiological ExG [electrocardiography (ECG), electromyography (EMG), and electroencephalography (EEG)] signal with various signal dimensions (128, 256, 384, and 512). Data c...
score_0 … score_13: 1.1, 0.1, 0.1, 0.1, 0.1, 0.05, 0.033333, 0.006667, 0, 0, 0, 0, 0, 0
Multiple Event Time-to-Digital Conversion-Based Pulse Digitization for a 250 MHz Pulse Radio Ranging Application A pulse digitizing approach for time-of-arrival pulse radio based ranging is introduced. It is based on a bank of time-to-digital converter (TDC) cores. A comparator bank triggers these multiple TDCs. This multiple-event approach has advantages over classic single-TDC solutions when facing unknown channel gains, noise corruption, and strong fading channel behavior. Pulses are digitized in a way that is superior in terms of performance versus power to classic A/D conversion. A power effort figure ξ and a new SNDR metric are introduced, easing performance comparison of pulse digitizers. A low-power 8-channel digitizing system with a resolution of δt_ring = 62.5 ps is presented for a cm-accurate ranging application. The asynchronous, event-based nature of the architecture requires nonstrobed comparators to fire value-crossing events. A dynamic range of 800:1 is realized. The digitization device is designed for 130 nm standard CMOS. An analog-baseband front-end with I-Q energy detection and comparator threshold-level configuration D/As are added to the design. The complete system is designed to consume 4 mW.
A Memristor-Based Continuous-Time Digital FIR Filter for Biomedical Signal Processing This paper proposes a new timing storage circuit based on memristors. Its ability to store and reproduce timing information in an analog manner without performing quantization can be useful for a wide range of applications. For continuous-time (CT) digital filters, the power and area costly analog delay blocks, which are usually implemented as inverter chains or their variants, can be replaced by the proposed timing storage circuits to delay CT digital signals in a more efficient way, especially for low-frequency biomedical applications that require very long tap delays. In addition, the same timing storage circuits also enable the storage of CT digital signals, extending the benefits of CT digital signal processing (DSP) to applications that require signal storage. As an example, a 15-tap CT finite impulse response (FIR) Savitzky-Golay (S-G) filter was designed with memristor-based delay blocks to smoothen electrocardiographic (ECG) signals accompanied with high-frequency noise. The simulated power consumption under a 3.3-volt supply was 6.63 .
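To illustrate the signal-processing behavior of the filter class implemented here in hardware (the smoothing itself, not the memristor circuit), a 15-tap, 3rd-order Savitzky-Golay smoother applied to a noisy synthetic wave via SciPy; the sampling rate and test signal are stand-ins, not the paper's setup.

```python
import numpy as np
from scipy.signal import savgol_filter

fs = 500                                 # assumed sampling rate, Hz
t = np.arange(0, 2, 1 / fs)
clean = np.sin(2 * np.pi * 1.2 * t)      # stand-in for a slow ECG-like wave
noisy = clean + 0.2 * np.random.randn(t.size)

# 15-tap window, cubic polynomial fit per window, matching the paper's tap count
smoothed = savgol_filter(noisy, window_length=15, polyorder=3)
print(np.std(noisy - clean), np.std(smoothed - clean))  # residual noise drops
```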
An ECG recording front-end with continuous-time level-crossing sampling. An ECG recording front-end with a continuous-time asynchronous level-crossing analog-to-digital converter (LC-ADC) is proposed. The system is a voltage- and current-mixed-mode system, which comprises a low noise amplifier (LNA), a programmable voltage-to-current converter (PVCC) as a programmable gain amplifier (PGA), and an LC-ADC with calibration DACs and an RC oscillator. The LNA shows an input-referred noise of 3.77 μVrms over a 0.06 Hz-950 Hz bandwidth. The total harmonic distortion (THD) of the LNA is 0.15% for a 10 mVpp input. The ECG front-end consumes 8.49 μW from a 1 V supply and achieves an ENOB of up to 8 bits. The core area of the proposed front-end is 690 × 710 μm², fabricated in a 0.18 μm CMOS technology.
The Virtual Trackpad: An Electromyography-Based, Wireless, Real-Time, Low-Power, Embedded Hand-Gesture-Recognition System Using an Event-Driven Artificial Neural Network. This brief presents a wireless, low-power embedded system that recognizes hand gestures by decoding surface electromyography (EMG) signals. Ten hand gestures used on commercial trackpads, including pinch, stretch, swipe left, swipe right, scroll up, scroll down, single click, double click, pat, and ok, can be recognized in real time. Features from four differential EMG channels are extracted in mu...
A Compact, Low-Power Analog Front-End With Event-Driven Input Biasing for High-Density Neural Recording in 22-nm FDSOI An ultra-small-area, low-power analog front-end (AFE) for high-density neural recording is presented in this brief. It features an 11-bit incremental delta-sigma analog-to-digital converter (ΔΣ ADC) enhanced with an offset-rejecting event-driven input biasing network. This network avoids saturation of the ADC in...
A 1-to-1-kHz, 4.2-to-544-nW, Multi-Level Comparator Based Level-Crossing ADC for IoT Applications. This brief presents the design of an ultra-low power level-crossing analog-to-digital converter (LC-ADC) for IoT and biomedical applications. The proposed LC-ADC utilizes only one multi-level comparator instead of multiple comparators as in conventional LC-ADC, leading to simplified implementation and significant reduction in power. Implemented in 0.18-μm CMOS process, the LC-ADC achieves 7.9 equi...
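Several entries in this cluster rely on level-crossing sampling; a generic sketch of the idea follows (emit an event only when the input crosses a level k·δ, so slowly varying segments produce no samples). This illustrates the principle, not any specific converter above; names and thresholds are assumptions.

```python
import numpy as np

def level_crossing_sample(x, delta):
    """Return (sample_index, crossed_level) events for uniformly spaced levels."""
    events = []
    level = int(np.floor(x[0] / delta))
    for i, v in enumerate(x[1:], start=1):
        new_level = int(np.floor(v / delta))
        while level != new_level:               # one event per crossed level
            level += 1 if new_level > level else -1
            events.append((i, level * delta))
    return events

t = np.linspace(0, 1, 1000)
x = np.sin(2 * np.pi * 2 * t)
events = level_crossing_sample(x, delta=0.25)
print(len(events), "events for", len(x), "uniform samples")  # far fewer events
```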
A 13.34 μW Event-driven Patient-specific ANN Cardiac Arrhythmia Classifier for Wearable ECG Sensors. Artificial neural networks (ANNs) and their variants are favored algorithms in designing cardiac arrhythmia classifiers (CACs) for their high accuracy. However, the implementation of an ultra-low-power ANN-CAC is challenging due to the intensive computations. Moreover, the imbalanced MIT-BIH database limits ANN-CAC performance. Several novel techniques are proposed to address the challenges of the low-power implementation. Firstly, a continuous-in-time discrete-in-amplitude (CTDA) signal flow is adopted to reduce the number of multiplication operations. Secondly, a conditional grouping scheme (CGS) in combination with biased training (BT) is proposed to handle the imbalanced training samples for better training convergence and evaluation accuracy. Thirdly, arithmetic unit sharing with a customized high-performance multiplier improves the power efficiency. Verified on FPGA and synthesized in a 0.18 μm CMOS process, the proposed CTDA ANN-CAC can classify an arrhythmia within 252 μs at a 25 MHz clock frequency with an average power of 13.34 μW for a 75 bpm heart rate. Evaluated on the MIT-BIH database, it shows over 98% classification accuracy, 97% sensitivity, and 94% positive predictivity.
A new approach to state observation of nonlinear systems with delayed output The article presents a new approach for the construction of a state observer for nonlinear systems when the output measurements are available for computations after a nonnegligible time delay. The proposed observer consists of a chain of observation algorithms reconstructing the system state at different delayed time instants (chain observer). Conditions are given for ensuring global exponential convergence to zero of the observation error for any given delay in the measurements. The implementation of the observer is simple and computer simulations demonstrate its effectiveness.
Distributed Subgradient Methods For Multi-Agent Optimization We study a distributed computation model for optimizing a sum of convex objective functions corresponding to multiple agents. For solving this (not necessarily smooth) optimization problem, we consider a subgradient method that is distributed among the agents. The method involves every agent minimizing his/her own objective function while exchanging information locally with other agents in the network over a time-varying topology. We provide convergence results and convergence rate estimates for the subgradient method. Our convergence rate results explicitly characterize the tradeoff between a desired accuracy of the generated approximate optimal solutions and the number of iterations needed to achieve the accuracy.
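In the notation commonly used for this method, agent i's update combines a consensus step over time-varying weights a_ij(k) with a subgradient step of size α:

```latex
x_i(k+1) \;=\; \sum_{j=1}^{m} a_{ij}(k)\, x_j(k) \;-\; \alpha\, g_i(k),
\qquad g_i(k) \in \partial f_i\bigl(x_i(k)\bigr),
```

so each agent averages its neighbors' iterates and then steps along a subgradient of its own objective; the paper's rate estimates quantify how the achievable accuracy trades off against the number of iterations.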
Filtering by Aliasing In this manuscript we describe a fundamentally novel approach to the design of anti-aliasing filters. The approach, termed Filtering by Aliasing, incorporates the frequency-domain aliasing operation itself into the filtering task. The spectral content is spread with a periodic mixer and weighted with a simple analog filter before it aliases at the sampler. By designing the system according to the formulations presented in this manuscript, the sampled output will have been subjected to sharp, highly programmable anti-alias filtering. This manuscript describes the proposed Filtering by Aliasing idea, the effective programmable anti-aliasing filter, its design, and its range of frequency responses. The manuscript also addresses the implementation sensitivities of the proposed Filtering by Aliasing approach and provides a performance comparison against existing techniques in the context of reconfigurable anti-alias filtering.
A Digital Requantizer With Shaped Requantization Noise That Remains Well Behaved After Nonlinear Distortion A major problem in oversampling digital-to-analog converters and fractional-N frequency synthesizers, which are ubiquitous in modern communication systems, is that the noise they introduce contains spurious tones. The spurious tones are the result of digitally generated, quantized signals passing through nonlinear analog components. This paper presents a new method of digital requantization called successive requantization, special cases of which avoids the spurious tone generation problem. Sufficient conditions are derived that ensure certain statistical properties of the quantization noise, including the absence of spurious tones after nonlinear distortion. A practical example is presented and shown to satisfy these conditions.
Extremal cover times for random walks on trees
CCFI: Cryptographically Enforced Control Flow Integrity Control flow integrity (CFI) restricts jumps and branches within a program to prevent attackers from executing arbitrary code in vulnerable programs. However, traditional CFI still offers attackers too much freedom to choose among valid jump targets, as seen in recent attacks. We present a new approach to CFI based on cryptographic message authentication codes (MACs). Our approach, called cryptographic CFI (CCFI), uses MACs to protect control flow elements such as return addresses, function pointers, and vtable pointers. Through dynamic checks, CCFI enables much finer-grained classification of sensitive pointers than previous approaches, thwarting all known attacks and resisting even attackers with arbitrary access to program memory. We implemented CCFI in Clang/LLVM, taking advantage of recently available cryptographic CPU instructions (AES-NI). We evaluate our system on several large software packages (including nginx, Apache and memcache) as well as all their dependencies. The cost of protection ranges from a 3-18% decrease in server request rate. We also expect this overhead to shrink as Intel improves the performance of AES-NI.
A VCO-Based Nonuniform Sampling ADC Using a Slope-Dependent Pulse Generator This paper presents a voltage-controlled oscillator (VCO)-based nonuniform sampling analog-to-digital converter (ADC) as an alternative to the level-crossing (LC)-based converters for digitizing biopotential signals. This work aims to provide a good signal-to-noise-and-distortion ratio at a low average sampling rate. In the proposed conversion method, a slope-dependent pulse generation block is used to provide a variable sample rate adjusted according to the input signal's slope. Simulation results show that the introduced method meets a target reconstruction quality with a sampling rate approaching 92 Sps, while on the same MIT-BIH Arrhythmia N 106 ECG benchmark, the classic LC-based approach requires a sampling rate higher than 500 Sps. The benefits of the proposed method are more remarkable when the input signal is very noisy. The proposed ADC achieves a compression ratio close to 4, but with only 5.4% root-mean-square difference when tested using the MIT-BIH Arrhythmia Database.
score_0 … score_13: 1.2, 0.2, 0.2, 0.2, 0.2, 0.1, 0.04, 0, 0, 0, 0, 0, 0, 0
Opportunistic Information Dissemination in Mobile Ad-hoc Networks: The Profit of Global Synchrony The topic of this paper is the study of information dissemination in Mobile Ad-hoc Networks by means of deterministic protocols. We characterize the connectivity resulting from the movement, from failures, and from the fact that nodes may join the computation at different times with two values, α and β, so that, within every α time slots, some node that has the information must be connected to some node without it for at least β time slots. The protocols studied are classified into three classes: oblivious (the transmission schedule of a node is only a function of its ID), quasi-oblivious (the transmission schedule may also depend on a global time), and adaptive. The main contribution of this work concerns negative results. Contrasting the lower and upper bounds derived, interesting complexity gaps among protocol classes are observed. More precisely, in order to guarantee any progress towards solving the problem, it is shown that β must be at least n − 1 in general, but that β ∈ Ω(n²/log n) if an oblivious protocol is used. Since quasi-oblivious protocols can guarantee progress with β ∈ O(n), this represents a significant gap, almost linear in n, between oblivious and quasi-oblivious protocols. Regarding the time to complete the dissemination, a lower bound of Ω(nα + n³/log n) is proved for oblivious protocols, which is tight up to a polylogarithmic factor because a constructive O(nα + n³ log n) upper bound exists for the same class. It is also proved that adaptive protocols require Ω(nα + n²), which is optimal given that a matching upper bound can be proved for quasi-oblivious protocols. These results show that the gap in time complexity between oblivious and quasi-oblivious, and hence adaptive, protocols is almost linear. This gap is what we call the profit of global synchrony, since it represents the gain the network obtains from global synchrony with respect to not having it.
Non Trivial Computations in Anonymous Dynamic Networks. In this paper we consider a static set of anonymous processes, i.e., processes without distinguished IDs, that communicate with neighbors using a local broadcast primitive. The communication graph changes at each computational round with the restriction of being always connected, i.e., the network topology guarantees 1-interval connectivity. In such a setting, non-trivial computations, i.e., answering a predicate such as "is there at least one process with initial input a?", are impossible. In a recent work, it has been conjectured that the impossibility holds even if a distinguished leader process is available within the computation. In this paper we prove that the conjecture is false. We show this result by implementing a deterministic leader-based terminating counting algorithm. In order to build our counting algorithm we first develop a counting technique that is time optimal on a family of dynamic graphs where each process has a fixed distance h from the leader and such distance does not change along rounds. Using this technique we build an algorithm that counts in anonymous 1-interval connected networks.
Opportunistic information dissemination in mobile ad-hoc networks: adaptiveness vs. obliviousness and randomization vs. determinism In this paper the problem of information dissemination in Mobile Ad-hoc Networks (MANETs) is studied. The problem is to disseminate a piece of information, initially held by a distinguished source node, to all nodes in a target set. We assume a weak set of restrictions on the mobility of nodes, parameterized by α, the disconnection time, and β, the link stability time, such that the MANETs considered are connected enough for dissemination. Such a connectivity model generalizes previous models in that we assume much less connectivity, or make explicit the assumptions in previous papers. In MANETs, nodes are embedded in the plane and can move with bounded speed. Communication between nodes occurs over a collision-prone single channel. We show upper and lower bounds for different types of randomized protocols, parameterized by α and β. This problem has been extensively studied in static networks and for deterministic protocols. We show tight bounds on the randomized complexity of information dissemination in MANETs, for reasonable choices of α and β. We show that randomization reduces the time complexity of the problem by a logarithmic or linear factor, depending on the class of randomized protocol considered.
Counting in Practical Anonymous Dynamic Networks is Polynomial. Anonymous Dynamic Networks is a harsh computational environment due to changing topology and lack of identifiers. Topology changes are well motivated by mobility and unreliable communication environments of present networks. With respect to node identifiers, in future massive networks it may be necessary or at least convenient to avoid them to facilitate mass production.
Reliable broadcast in mobile multihop packet networks
An early-stopping protocol for computing aggregate functions in Sensor Networks In this paper, we study algebraic aggregate computations in Sensor Networks. The main contribution is the presentation of an early-stopping protocol that computes the average function under a harsh model of the conditions under which sensor nodes operate. This protocol is shown to be time-optimal in the presence of infrequent failures. The approach saves time and energy by relying on a small network of delegate nodes that can be rebuilt quickly in case of node failures and that communicate using a collision-free schedule. Delegate nodes run two protocols simultaneously, namely, a collection/dissemination tree-based algorithm, which is shown to be optimal, and a mass-distribution algorithm. Both algorithms are analyzed under a model where the frequency of failures is a parameter. Other aggregate computation algorithms can be easily derived from this protocol. To the best of our knowledge, this is the first optimal early-stopping algorithm for aggregate computations in Sensor Networks.
Gossip-Based Computation of Aggregate Information Over the last decade, we have seen a revolution in connectivity between computers, and a resulting paradigm shift from centralized to highly distributed systems. With massive scale also comes massive instability, as node and link failures become the norm rather than the exception. For such highly volatile systems, decentralized gossip-based protocols are emerging as an approach to maintaining simplicity and scalability while achieving fault-tolerant information dissemination.In this paper, we study the problem of computing aggregates with gossip-style protocols. Our first contribution is an analysis of simple gossip-based protocols for the computations of sums, averages, random samples, quantiles, and other aggregate functions, and we show that our protocols converge exponentially fast to the true answer when using uniform gossip.Our second contribution is the definition of a precise notion of the speed with which a node's data diffuses through the network. We show that this diffusion speed is at the heart of the approximation guarantees for all of the above problems. We analyze the diffusion speed of uniform gossip in the presence of node and link failures, as well as for flooding-based mechanisms. The latter expose interesting connections to random walks on graphs.
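To make the uniform-gossip averaging idea above concrete, here is a minimal Python sketch of a push-sum style protocol in the spirit of the abstract; the synchronous-round loop, idealized uniform peer sampling, and fixed round count are simplifying assumptions rather than the paper's exact model.

```python
# Hedged sketch of push-sum gossip averaging: each node keeps a (sum,
# weight) pair, halves it every round, and sends one half to a random
# peer; sum/weight converges to the global average under uniform gossip.
import random

def push_sum_average(values, rounds=50, seed=0):
    rng = random.Random(seed)
    n = len(values)
    s = list(values)           # running sums
    w = [1.0] * n              # running weights
    for _ in range(rounds):
        inbox = [(0.0, 0.0)] * n
        for i in range(n):
            s[i] /= 2.0        # keep one half locally...
            w[i] /= 2.0
            j = rng.randrange(n)
            ds, dw = inbox[j]  # ...and push the other half to a random peer
            inbox[j] = (ds + s[i], dw + w[i])
        for i in range(n):
            s[i] += inbox[i][0]
            w[i] += inbox[i][1]
    return [si / wi for si, wi in zip(s, w)]

print(push_sum_average([10.0, 0.0, 5.0, 1.0])[:2])  # all estimates near 4.0
```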
Communication-efficient leader election in crash-recovery systems This work addresses the leader election problem in partially synchronous distributed systems where processes can crash and recover. More precisely, it focuses on implementing the Omega failure detector class, which provides a leader election functionality, in the crash-recovery failure model. The concepts of communication efficiency and near-efficiency for an algorithm implementing Omega are defined. Depending on the use or not of stable storage, the property satisfied by unstable processes, i.e., those that crash and recover infinitely often, varies. Two algorithms implementing Omega are presented. In the first algorithm, which is communication-efficient and uses stable storage, unstable processes eventually and permanently agree on the leader with correct processes. In the second algorithm, which is near-communication-efficient and does not use stable storage, processes start their execution with no leader in order to avoid disagreement among unstable processes, which will agree on the leader with correct processes after receiving a first message from the leader.
Building efficient wireless sensor networks with low-level naming In most distributed systems, naming of nodes for low-level communication leverages topological location (such as node addresses) and is independent of any application. In this paper, we investigate an emerging class of distributed systems where low-level communication does not rely on network topological location. Rather, low-level communication is based on attributes that are external to the network topology and relevant to the application. When combined with dense deployment of nodes, this kind of named data enables in-network processing for data aggregation, collaborative signal processing, and similar problems. These approaches are essential for emerging applications such as sensor networks where resources such as bandwidth and energy are limited. This paper is the first description of the software architecture that supports named data and in-network processing in an operational, multi-application sensor-network. We show that approaches such as in-network aggregation and nested queries can significantly affect network traffic. In one experiment aggregation reduces traffic by up to 42% and nested queries reduce loss rates by 30%. Although aggregation has been previously studied in simulation, this paper demonstrates nested queries as another form of in-network processing, and it presents the first evaluation of these approaches over an operational testbed.
Practical delegation of computation using multiple servers The current move to Cloud Computing raises the need for verifiable delegation of computations, where a weak client delegates his computation to a powerful server, while maintaining the ability to verify that the result is correct. Although there are prior solutions to this problem, none of them is yet both general and practical for real-world use. We demonstrate a relatively efficient and general solution where the client delegates the computation to several servers, and is guaranteed to determine the correct answer as long as even a single server is honest. We show: A protocol for any efficiently computable function, with logarithmically many rounds, based on any collision-resistant hash family. The protocol is set in terms of Turing Machines but can be adapted to other computation models. An adaptation of the protocol for the X86 computation model and a prototype implementation, called Quin, for Windows executables. We describe the architecture of Quin and experiment with several parameters on live clouds. We show that the protocol is practical, can work with nowadays clouds, and is efficient both for the servers and for the client.
Pinning a complex dynamical network to its equilibrium It is now known that the complexity of network topology has a great impact on the stabilization of complex dynamical networks. In this work, we study the control of random networks and scale-free networks. Conditions are investigated for globally or locally stabilizing such networks. Our strategy is to apply local feedback control to a small fraction of network nodes. We propose the concept of virtual control for microscopic dynamics throughout the process, with different pinning schemes for both random networks and scale-free networks. We explain the main reason why significantly fewer local controllers are required when the most highly connected nodes of a scale-free network are specifically pinned than when nodes are pinned at random, and why there is no significant difference between specific and random pinning schemes for controlling random dynamical networks. We also study the synchronization phenomenon of controlled dynamical networks in the stabilization process, both analytically and numerically.
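As a toy illustration of the pinning-scheme comparison above, the following Python sketch selects the nodes to pin by degree (specific pinning); the adjacency list and pinning fraction are illustrative assumptions, and random pinning would simply sample the same number of nodes uniformly instead.

```python
# Hedged sketch of degree-based pin selection: pin the most highly
# connected nodes of a graph, as in the specific pinning scheme.
def pick_pinned_nodes(adj, fraction=0.1):
    """Return the top `fraction` of nodes by degree (specific pinning)."""
    degrees = {v: len(nbrs) for v, nbrs in adj.items()}
    k = max(1, int(fraction * len(adj)))
    return sorted(degrees, key=degrees.get, reverse=True)[:k]

# Toy star-like graph: node 0 is the hub.
adj = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0, 4], 4: [0, 3]}
print(pick_pinned_nodes(adj, 0.2))  # hub node 0 is pinned first
```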
Simulation knowledge extraction and reuse in constrained random processor verification This work proposes a methodology of knowledge extraction from constrained-random simulation data. Feature-based analysis is employed to extract rules describing the unique properties of novel assembly programs hitting special conditions. The knowledge learned can be reused to guide constrained-random test generation towards uncovered corners. The experiments are conducted based on the verification environment of a commercial processor design, in parallel with the on-going verification efforts. The experimental results show that by leveraging the knowledge extracted from constrained-random simulation, we can improve the test templates to activate the assertions that otherwise are difficult to activate by extensive simulation.
Optimum insertion/deletion point selection for fractional sample rate conversion In this paper, an optimum insertion/deletion point selection algorithm for fractional sample rate conversion (SRC) is proposed. The direct insertion/deletion technique achieves low complexity and low power consumption as compared to the other fractional SRC methods. Using a multiple set insertion/deletion technique is efficient for reduction of the distortion caused by the insertion/deletion step. When the conversion factor is (N ± k)/N, the number of possible patterns of insertion/deletion points and the number of combinations for multiple set inserters/deleters grow as k increases. The proposed algorithm minimizes the distortion due to SRC by selecting the patterns and the combinations.
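The underlying mechanism (though not the paper's optimum selection algorithm) can be pictured with a small Python sketch: converting by a factor of (N − d)/N deletes d samples per N-sample frame at chosen in-frame positions, and the choice of those positions is exactly what the proposed algorithm optimizes. The function name and frame parameters below are hypothetical.

```python
# Hedged sketch of direct-deletion fractional sample-rate conversion:
# drop d samples at fixed in-frame positions out of every N samples.
def delete_points_src(x, N, d, points):
    """Convert by (N - d)/N; `points` lists the d in-frame positions to drop."""
    assert len(points) == d
    drop = set(points)
    return [v for i, v in enumerate(x) if i % N not in drop]

x = list(range(20))
print(delete_points_src(x, N=10, d=1, points=[9]))  # 18 of 20 samples remain
```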
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54 ×(CPU), 1.56 ×(GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02 ×(memristor-based PIM) and 10.48 ×(CMOS-based PIM), on average.
1.060481
0.06
0.05
0.05
0.018764
0.011251
0.004961
0.000269
0.000006
0
0
0
0
0
Constant Power Loads and Negative Impedance Instability in Automotive Systems: Definition, Modeling, Stability, and Control of Power Electronic Converters and Motor Drives Power electronic converters and electric motor drives are being put into use at an increasingly rapid rate in advanced automobiles. However, the new advanced automotive electrical systems employ multivoltage level hybrid ac and dc as well as electromechanical systems that have unique characteristics, dynamics, and stability problems that are not well understood due to the nonlinearity and time dep...
Performance Evaluation of an EDA-Based Large-Scale Plug-In Hybrid Electric Vehicle Charging Algorithm The anticipation of a large penetration of plug-in hybrid electric vehicles (PHEVs) into the market brings up many technical problems that need to be addressed. In the near future, a large number of PHEVs in our society will add a large-scale energy load to our power grids, as well as add substantial energy resources that can be utilized. An emerging issue is that a large number of PHEVs simultaneously connected to the grid may pose a huge threat to the overall power system quality and stability. In this paper, the authors propose an algorithm for optimally managing a large number of PHEVs (e.g., 3000) charging at a municipal parking station. The authors used the estimation of distribution algorithm (EDA) to intelligently allocate electrical energy to the PHEVs connected to the grid. A mathematical framework for the objective function (i.e., maximizing the average state-of-charge at the next time step) is also given. The authors considered real-world constraints such as energy price, remaining battery capacity, and remaining charging time. The authors also simulated the real-world parking deck scenarios according to the statistical analysis based on the transportation data. The authors characterized the performance of EDA using a Matlab simulation, and compared it with other optimization techniques.
The Evolution of Plug-In Electric Vehicle-Grid Interactions Over the past decade key technologies have progressed so that mass-market viable plug-in electric vehicles (PEVs) are now set to reach the first of many major vehicle markets by 2011. PEV-grid interactions comprise a mix of industries that have not interacted closely in the past. A number of these commercial participants have utilized the same basic business model for nearly a century. The various participants include vehicle manufacturers, utilities, and supplier firms who have radically different business models, regulatory and legal environments, geographical scope, and technical capabilities. This paper will provide a survey of PEV technology trends and other factors. From an analysis of these factors this paper synthesizes and provides a likely scenario for PEV-grid interaction over the next decade.
Comprehensive Topological Analysis of Conductive and Inductive Charging Solutions for Plug-In Electric Vehicles. The impending global energy crisis has opened up new opportunities for the automotive industry to meet the ever-increasing demand for cleaner and fuel-efficient vehicles. This has necessitated the development of drivetrains that are either fully or partially electrified in the form of electric and plug-in hybrid electric vehicles (EVs and HEVs), respectively, which are collectively addressed as plug-in EVs (PEVs). PEVs in general are equipped with larger on-board storage and power electronics for charging or discharging the battery, in comparison with HEVs. The extent to which PEVs are adopted significantly depends on the nature of the charging solution utilized. In this paper, a comprehensive topological survey of the currently available PEV charging solutions is presented. PEV chargers based on the nature of charging (conductive or inductive), stages of conversion (integrated single stage or two stages), power level (level 1, 2, or 3), and type of semiconductor devices utilized (silicon, silicon carbide, or gallium nitride) are thoroughly reviewed in this paper.
Development of an Optimal Vehicle-to-Grid Aggregator for Frequency Regulation For vehicle-to-grid (V2G) frequency regulation services, we propose an aggregator that makes efficient use of the distributed power of electric vehicles to produce the desired grid-scale power. The cost arising from the battery charging and the revenue obtained by providing the regulation are investigated and represented mathematically. Some design considerations of the aggregator are also discussed together with practical constraints such as the energy restriction of the batteries. The cost function with constraints enables us to construct an optimization problem. Based on the developed optimization problem, we apply the dynamic programming algorithm to compute the optimal charging control for each vehicle. Finally, simulations are provided to illustrate the optimality of the proposed charging control strategy with variations of parameters.
Chord: a scalable peer-to-peer lookup protocol for internet applications A fundamental problem that confronts peer-to-peer applications is the efficient location of the node that stores a desired data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis and simulations show that Chord is scalable: Communication cost and the state maintained by each node scale logarithmically with the number of Chord nodes.
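A minimal Python sketch of Chord's core key-to-node mapping, assuming a toy identifier size and ignoring finger tables, joins, and failures: a key is assigned to its successor, i.e., the first node clockwise from the key's identifier on the ring.

```python
# Hedged sketch of Chord's consistent-hashing assignment of keys to nodes.
import hashlib
from bisect import bisect_left

M = 16  # identifier bits (assumption; real deployments use e.g. 160-bit SHA-1)

def chord_id(name: str) -> int:
    return int(hashlib.sha1(name.encode()).hexdigest(), 16) % (2 ** M)

def successor(node_ids, key_id):
    """First node at or clockwise after key_id on the identifier ring."""
    ids = sorted(node_ids)
    i = bisect_left(ids, key_id)
    return ids[i % len(ids)]  # wrap around the ring

nodes = [chord_id(f"node{i}") for i in range(8)]
print(successor(nodes, chord_id("my-data-item")))  # node responsible for the key
```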
Computing size-independent matrix problems on systolic array processors A methodology to transform dense matrices to band matrices is presented in this paper. This transformation is accomplished by partitioning into triangular blocks, and allows the implementation of solutions to problems of any given size by means of contraflow systolic arrays, originally proposed by H.T. Kung. Matrix-vector and matrix-matrix multiplications are the operations considered here. The proposed transformations allow the optimal utilization of processing elements (PEs) of the systolic array when dense matrices are operated on. Every computation is made inside the array by using adequate feedback. The feedback delay time depends only on the systolic array size.
A 12 bit 2.9 GS/s DAC With IM3 < −60 dBc Beyond 1 GHz in 65 nm CMOS A 12 bit 2.9 GS/s current-steering DAC implemented in 65 nm CMOS is presented, with an IM3 < −60 dBc beyond 1 GHz while driving a 50 Ω load with an output swing of 2.5 Vppd and dissipating a power of 188 mW. The SFDR measured at 2.9 GS/s is better than 60 dB beyond 340 MHz while the SFDR measured at 1.6 GS/s is better than 60 dB beyond 440 MHz. The increase in performance at high frequencies, co...
The M-Machine multicomputer The M-Machine is an experimental multicomputer being developed to test architectural concepts motivated by the constraints of modern semiconductor technology and the demands of programming systems. The M-Machine computing nodes are connected with a 3-D mesh network; each node is a multithreaded processor incorporating 9 function units, on-chip cache, and local memory. The multiple function units are used to exploit both instruction-level and thread-level parallelism. A user accessible message passing system yields fast communication and synchronization between nodes. Rapid access to remote memory is provided transparently to the user with a combination of hardware and software mechanisms. This paper presents the architecture of the M-Machine and describes how its mechanisms attempt to maximize both single thread performance and overall system throughput. The architecture is complete and the MAP chip, which will serve as the M-Machine processing node, is currently being implemented.
SPONGENT: a lightweight hash function This paper proposes spongent - a family of lightweight hash functions with hash sizes of 88 (for preimage resistance only), 128, 160, 224, and 256 bits based on a sponge construction instantiated with a present-type permutation, following the hermetic sponge strategy. Its smallest implementations in ASIC require 738, 1060, 1329, 1728, and 1950 GE, respectively. To our best knowledge, at all security levels attained, it is the hash function with the smallest footprint in hardware published so far, the parameter being highly technology dependent. spongent offers a lot of flexibility in terms of serialization degree and speed. We explore some of its numerous implementation trade-offs. We furthermore present a security analysis of spongent. Basing the design on a present-type primitive provides confidence in its security with respect to the most important attacks. Several dedicated attack approaches are also investigated.
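Since spongent instantiates the sponge construction, a generic absorb/squeeze sketch may help to picture its structure. The permutation below is a toy placeholder, and the rate, capacity, and output sizes are arbitrary assumptions, not spongent's parameters.

```python
# Hedged sketch of the generic sponge construction (absorb then squeeze).
# NOTE: toy_permutation is a placeholder, NOT the spongent permutation,
# which is an iterated present-type S-box/bit-permutation round function.
def toy_permutation(state: bytes) -> bytes:
    return bytes(((b * 167 + i * 29) ^ 0x5A) & 0xFF for i, b in enumerate(state))

def sponge_hash(msg: bytes, rate=2, capacity=6, out_len=8) -> bytes:
    width = rate + capacity
    state = bytes(width)
    msg += b"\x80" + b"\x00" * (-(len(msg) + 1) % rate)    # pad10* to the rate
    for i in range(0, len(msg), rate):                     # absorbing phase
        block = msg[i:i + rate] + bytes(capacity)
        state = toy_permutation(bytes(a ^ b for a, b in zip(state, block)))
    out = b""
    while len(out) < out_len:                              # squeezing phase
        out += state[:rate]
        state = toy_permutation(state)
    return out[:out_len]

print(sponge_hash(b"hello").hex())
```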
Noise Analysis and Simulation Method for a Single-Slope ADC With CDS in a CMOS Image Sensor Many mixed-signal circuits are nonlinear time-varying systems whose noise estimation cannot be obtained from the conventional frequency domain noise simulation (FNS). Although the transient noise simulation (TNS) supported by a commercial simulator takes into account nonlinear time-varying characteristics of the circuit, its simulation time is unacceptably long to obtain meaningful noise estimatio...
Design of ultra-wide-load, high-efficient DC-DC buck converters The paper presents the design of a current-mode control DC-DC buck converter with pulse-width modulation (PWM) mode. The converter achieves over 90% efficiency for load currents ranging from 50 mA to 500 mA, with a maximum power efficiency of 95.6%; the circuit was simulated in the TSMC 0.35 um CMOS process. In order to achieve high efficiency over an ultra-wide load range, this design uses two PMOS transistors as switches. Results show that the converter achieves above 90% efficiency over the range from 30 mA to 1200 mA with a maximum efficiency of 96.36%. Results show that, with the additional switch transistor, the load current range is more than doubled. With two PMOS transistors, the proposed converter can also achieve 3 different load ranges so that it can be programmed for applications operating at those three different load ranges.
A 0.5-V 2.5-GHz high-gain low-power regenerative amplifier based on Colpitts oscillator topology in 65-nm CMOS This paper proposes the regenerative amplifier based on the Colpitts oscillator topology. The positive feedback amount was optimized analytically in the circuit design. The proposed regenerative amplifier was fabricated in 65 nm CMOS technology. The measurement results showed 28.7 dB gain and 6.4 dB noise figure at 2.55 GHz while consuming 120 μW under the 0.5-V power supply.
A 32-Channel Time-Multiplexed Artifact-Aware Neural Recording System This paper presents a low-power, low-noise microsystem for the recording of neural local field potentials or intracranial electroencephalographic signals. It features 32 time-multiplexed channels at the electrode interface and offers the possibility to spatially delta encode data to take advantage of the large correlation of signals captured from nearby channels. The circuit also implements a mixed-signal voltage-triggered auto-ranging algorithm which attenuates large interferers in the digital domain while preserving neural information. This effectively increases the system dynamic range and avoids the onset of saturation. A prototype, fabricated in a standard 180 nm CMOS process, has been experimentally verified in-vitro with cellular cultures of primary cortical neurons from mice. The system shows an integrated input-referred noise in the 0.5–200 Hz band of 1.4 μVrms for a spot noise of about 85 nV/√Hz. The system draws 1.5 μW per channel from a 1.2 V supply and obtains 71 dB + 26 dB dynamic range when the artifact-aware auto-ranging mechanism is enabled, without penalising other critical specifications such as crosstalk between channels or common-mode and power supply rejection ratios.
1.05856
0.044
0.044
0.044
0.0132
0
0
0
0
0
0
0
0
0
Fog computing for the internet of things: a survey Research in the Internet of Things (IoT) conceives a world where everyday objects are connected to the Internet and exchange, store, process, and collect data from the surrounding environment. IoT devices are becoming essential for supporting the delivery of data to enable electronic services, but they are not sufficient in most cases to host application services directly due to their intrinsic resource constraints. Fog Computing (FC) can be a suitable paradigm to overcome these limitations, as it can coexist and cooperate with centralized Cloud systems and extends the latter toward the network edge. In this way, it is possible to distribute resources and services of computing, storage, and networking along the Cloud-to-Things continuum. As such, FC brings all the benefits of Cloud Computing (CC) closer to end (user) devices. This article presents a survey on the employment of FC to support IoT devices and services. The principles and literature characterizing FC are described, highlighting six IoT application domains that may benefit from the use of this paradigm. The extension of Cloud systems towards the network edge also creates new challenges and can have an impact on existing approaches employed in Cloud-based deployments. Research directions being adopted by the community are highlighted, with an indication of which of these are likely to have the greatest impact. An overview of existing FC software and hardware platforms for the IoT is also provided, along with the standardisation efforts in this area initiated by the OpenFog Consortium (OFC).
State Machine Replication for the Masses with BFT-SMART The last fifteen years have seen an impressive amount of work on protocols for Byzantine fault-tolerant (BFT) state machine replication (SMR). However, there is still a need for practical and reliable software libraries implementing this technique. BFT-SMART is an open-source Java-based library implementing robust BFT state machine replication. Some of the key features of this library that distinguishes it from similar works (e.g., PBFT and UpRight) are improved reliability, modularity as a first-class property, multicore-awareness, reconfiguration support and a flexible programming interface. When compared to other SMR libraries, BFT-SMART achieves better performance and is able to withstand a number of real-world faults that previous implementations cannot.
Demystifying Fog Computing: Characterizing Architectures, Applications and Abstractions Internet of Things (IoT) has accelerated the deployment of millions of sensors at the edge of the network, through Smart City infrastructure and lifestyle devices. Cloud computing platforms are often tasked with handling these large volumes and fast streams of data from the edge. Recently, Fog computing has emerged as a concept for low-latency and resource-rich processing of these observation streams, to complement Edge and Cloud computing. In this paper, we review various dimensions of system architecture, application characteristics and platform abstractions that are manifest in this Edge, Fog and Cloud eco-system. We highlight novel capabilities of the Edge and Fog layers, such as physical and application mobility, privacy sensitivity, and a nascent runtime environment. IoT application case studies based on first-hand experiences across diverse domains drive this categorization. We also highlight the gap between the potential and the reality of Fog computing, and identify challenges that need to be overcome for the solution to be sustainable. Taken together, our article can help platform and application developers bridge the gap that remains in making Fog computing viable.
Peer-to-Peer Bidirectional Streaming Using Mobile Edge Computing P2P streaming services, which deliver content between peers without a delivery server, are popular. Because there is no delivery server, P2P streaming reduces cost and avoids concentrating load on any single peer; however, it is common for peers with short round-trip times (RTTs) to connect with each other and deliver content. As a result, the number of hops, and hence the delay, increases; there is also a withdrawal-tolerance problem: when peers stop viewing and leave, other peers may no longer receive content. In this study, we focus on bidirectional streaming, where these problems are pronounced, and propose a bidirectional streaming scheme that uses edge computing to reduce the number of hops and improve withdrawal tolerance. Furthermore, we verify the usefulness of the proposed system by simulation and show that it reduces the number of hops and improves withdrawal tolerance compared with a conventional P2P distribution system.
A proposal of a distributed access control over Fog computing: The ITS use case Internet of Things (IoT) raises many security challenges in relation with the different applications that can be deployed over these environments. IoT access control systems must respond to the new IoT requirements, such as scalability, dynamicity, real-time interaction, and resource constraints. The goal of this paper is to propose an approach based on Fog computing and Distributed Hash Tables (DHTs) toward access control for the Internet of Things. To evaluate the performance of our access control solution, we used NS-3 and SUMO. The preliminary results show acceptable overhead for the considered Intelligent Transport System (ITS) scenario.
Fog Computing: Helping the Internet of Things Realize Its Potential. The Internet of Things (IoT) could enable innovations that enhance the quality of life, but it generates unprecedented amounts of data that are difficult for traditional systems, the cloud, and even edge computing to handle. Fog computing is designed to overcome these limitations.
Threaded code The concept of “threaded code” is presented as an alternative to machine language code. Hardware and software realizations of it are given. In software it is realized as interpretive code not needing an interpreter. Extensions and optimizations are mentioned.
Leveraging on-chip voltage regulators as a countermeasure against side-channel attacks Side-channel attacks have become a significant threat to integrated circuit security. Circuit-level techniques are proposed in this paper as a countermeasure against side-channel attacks. A distributed on-chip power delivery system consisting of multi-level switched capacitor (SC) voltage converters is proposed, where the individual interleaved stages are turned on and off either based on the workload information or pseudo-randomly to scramble the power consumption profile. In the case that changes in the workload demand do not trigger the power delivery system to turn individual stages on or off, the active stages are reshuffled with so-called converter-reshuffling to insert random spikes in the power consumption profile. An entropy-based metric is developed to evaluate the security performance of the proposed converter-reshuffling technique as compared to three other existing on-chip power delivery schemes. The increase in the power trace entropy with the CoRe scheme is also demonstrated with simulation results to further verify the theoretical analysis.
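One way to picture an entropy-based metric of the kind mentioned above is to compute the Shannon entropy of a quantized power trace, as in the hedged Python sketch below; the binning scheme and the two toy traces are illustrative assumptions, not the paper's exact metric.

```python
# Hedged sketch: Shannon entropy of a quantized power trace. A flatter,
# more random-looking consumption profile yields higher entropy, which
# is the intuition behind scrambling the profile against side channels.
import math
from collections import Counter

def power_trace_entropy(trace, bins=16):
    lo, hi = min(trace), max(trace)
    width = (hi - lo) / bins or 1.0          # guard against a constant trace
    counts = Counter(min(int((p - lo) / width), bins - 1) for p in trace)
    n = len(trace)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

steady = [1.0] * 64                                          # constant draw
shuffled = [1.0 + 0.1 * ((7 * i) % 13) for i in range(64)]   # reshuffled stages
print(power_trace_entropy(steady), power_trace_entropy(shuffled))  # 0.0 vs >3 bits
```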
The PARSEC benchmark suite: characterization and architectural implications This paper presents and characterizes the Princeton Application Repository for Shared-Memory Computers (PARSEC), a benchmark suite for studies of Chip-Multiprocessors (CMPs). Previous available benchmarks for multiprocessors have focused on high-performance computing applications and used a limited number of synchronization methods. PARSEC includes emerging applications in recognition, mining and synthesis (RMS) as well as systems applications which mimic large-scale multithreaded commercial programs. Our characterization shows that the benchmark suite covers a wide spectrum of working sets, locality, data sharing, synchronization and off-chip traffic. The benchmark suite has been made available to the public.
Gossip-based aggregation in large dynamic networks As computer networks increase in size, become more heterogeneous and span greater geographic distances, applications must be designed to cope with the very large scale, poor reliability, and often, with the extreme dynamism of the underlying network. Aggregation is a key functional building block for such applications: it refers to a set of functions that provide components of a distributed system access to global information including network size, average load, average uptime, location and description of hotspots, and so on. Local access to global information is often very useful, if not indispensable for building applications that are robust and adaptive. For example, in an industrial control application, some aggregate value reaching a threshold may trigger the execution of certain actions; a distributed storage system will want to know the total available free space; load-balancing protocols may benefit from knowing the target average load so as to minimize the load they transfer. We propose a gossip-based protocol for computing aggregate values over network components in a fully decentralized fashion. The class of aggregate functions we can compute is very broad and includes many useful special cases such as counting, averages, sums, products, and extremal values. The protocol is suitable for extremely large and highly dynamic systems due to its proactive structure---all nodes receive the aggregate value continuously, thus being able to track any changes in the system. The protocol is also extremely lightweight, making it suitable for many distributed applications including peer-to-peer and grid computing systems. We demonstrate the efficiency and robustness of our gossip-based protocol both theoretically and experimentally under a variety of scenarios including node and communication failures.
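A minimal Python sketch of push-pull gossip averaging in the spirit of this protocol, assuming idealized uniform peer sampling and synchronous rounds: each exchange replaces both peers' estimates with their mean, which preserves the global sum while driving every node's estimate toward the average.

```python
# Hedged sketch of push-pull gossip averaging over a fully mixed network.
import random

def push_pull_average(values, rounds=30, seed=1):
    rng = random.Random(seed)
    v = list(values)
    n = len(v)
    for _ in range(n and rounds):
        for i in range(n):
            j = rng.randrange(n)          # idealized uniform peer sampling
            mean = (v[i] + v[j]) / 2.0    # pairwise exchange preserves the sum
            v[i] = v[j] = mean
    return v

print(push_pull_average([0.0, 8.0, 4.0, 4.0]))  # all estimates near 4.0
```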
Linear Amplification with Nonlinear Components A technique for producing bandpass linear amplification with nonlinear components (LINC) is described. The bandpass signal first is separated into two constant envelope component signals. All of the amplitude and phase information of the original bandpass signal is contained in phase modulation on the component signals. These constant envelope signals can be amplified or translated in frequency by amplifiers or mixers which have nonlinear input-output amplitude transfer characteristics. Passive linear combining of the amplified and/or translated component signals produces an amplified and/or translated replica of the original signal.
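The LINC decomposition can be checked numerically: with outphasing angle θ(t) = arccos(A(t)/Amax), the two constant-envelope components recombine exactly to the original bandpass signal. The carrier frequency, envelope, and phase modulation in this Python sketch are arbitrary assumptions.

```python
# Hedged sketch of the LINC decomposition: split a bandpass signal into
# two constant-envelope, phase-modulated components whose sum is exact.
import numpy as np

fs, fc = 1e6, 50e3
t = np.arange(0, 1e-3, 1 / fs)
a = 0.5 * (1 + 0.8 * np.sin(2 * np.pi * 1e3 * t))   # envelope in [0.1, 0.9]
phi = 0.3 * np.sin(2 * np.pi * 2e3 * t)             # phase modulation
a_max = 1.0

theta = np.arccos(np.clip(a / a_max, -1.0, 1.0))    # outphasing angle
s1 = (a_max / 2) * np.cos(2 * np.pi * fc * t + phi + theta)
s2 = (a_max / 2) * np.cos(2 * np.pi * fc * t + phi - theta)

s = a * np.cos(2 * np.pi * fc * t + phi)            # original signal
print(np.max(np.abs((s1 + s2) - s)))                # ~0: exact recombination
```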
Opportunistic Information Dissemination in Mobile Ad-hoc Networks: The Profit of Global Synchrony The topic of this paper is the study of Information Dissemination in Mobile Ad-hoc Networks by means of deterministic protocols. We characterize the connectivity resulting from the movement, from failures and from the fact that nodes may join the computation at different times with two values, α and β, so that, within each time interval of α time slots, some node that has the information must be connected to some node without it for at least β time slots. The protocols studied are classified into three classes: oblivious (the transmission schedule of a node is only a function of its ID), quasi-oblivious (the transmission schedule may also depend on a global time), and adaptive. The main contribution of this work concerns negative results. Contrasting the lower and upper bounds derived, interesting complexity gaps among protocol classes are observed. More precisely, in order to guarantee any progress towards solving the problem, it is shown that β must be at least n − 1 in general, but that β ∈ Ω(n²/log n) if an oblivious protocol is used. Since quasi-oblivious protocols can guarantee progress with β ∈ O(n), this represents a significant gap, almost linear in n, between oblivious and quasi-oblivious protocols. Regarding the time to complete the dissemination, a lower bound of Ω(nα + n³/log n) is proved for oblivious protocols, which is tight up to a polylogarithmic factor because a constructive O(nα + n³ log n) upper bound exists for the same class. It is also proved that adaptive protocols require Ω(nα + n²), which is optimal given that a matching upper bound can be proved for quasi-oblivious protocols. These results show that the gap in time complexity between oblivious and quasi-oblivious, and hence adaptive, protocols is almost linear. This gap is what we call the profit of global synchrony, since it represents the gain the network obtains from global synchrony with respect to not having it.
Towards elastic SDR architectures using dynamic task management. SDR platforms integrating several types and numbers of processing elements in Systems-on-Chip are becoming an attractive solution for baseband processing in wireless systems. In order to cope with the diversity of protocol applications and the heterogeneity of multi-core architectures, a hierarchical approach for workload distribution is proposed in this paper. Specifically, a system-level scheduler is employed to map applications to multiple processing clusters, complemented with a cluster-level scheduler - the CoreManager - for dynamic resource allocation and configuration as well as for task and data scheduling. A performance analysis of the proposed approach is presented, which shows the advantages of dynamic scheduling against a static approach for variable workloads in LTE-Advanced uplink multi-user scenarios.
Multi-Channel Neural Recording Implants: A Review. The recently growing progress in neuroscience research and relevant achievements, as well as advancements in the fabrication process, have increased the demand for neural interfacing systems. Brain-machine interfaces (BMIs) have been revealed to be a promising method for the diagnosis and treatment of neurological disorders and the restoration of sensory and motor function. Neural recording implants, as a part of BMI, are capable of capturing brain signals, and amplifying, digitizing, and transferring them outside of the body with a transmitter. The main challenges of designing such implants are minimizing power consumption and the silicon area. In this paper, multi-channel neural recording implants are surveyed. After presenting various neural-signal features, we investigate main available neural recording circuit and system architectures. The fundamental blocks of available architectures, such as neural amplifiers, analog to digital converters (ADCs) and compression blocks, are explored. We cover the various topologies of neural amplifiers, provide a comparison, and probe their design challenges. To achieve a relatively high SNR at the output of the neural amplifier, noise reduction techniques are discussed. Also, to transfer neural signals outside of the body, they are digitized using data converters, then in most cases, the data compression is applied to mitigate power consumption. We present the various dedicated ADC structures, as well as an overview of main data compression methods.
1.2
0.2
0.2
0.2
0.2
0.04
0
0
0
0
0
0
0
0
A dynamic event-triggered approach to observer-based PID security control subject to deception attacks In this paper, the observer-based PID security control problem is investigated for a class of linear discrete-time systems subject to deception attacks. A new index for security level is proposed to account for the effect of the randomly occurring deception attack on the closed-loop system. A dynamic event-triggered mechanism, whose threshold parameter is dynamically adjusted according to a certain rule, is exploited to modulate the transmission of data packets with hope to effectively alleviate unnecessary energy consumption. Sufficient conditions for the existence of the expected observer-based PID controller are presented to ensure the input-to-state stability of the closed-loop system while achieving the prescribed security index. Gain matrices of the desired PID controller are parameterized in terms of the solutions to certain matrix inequalities that are readily solvable. Finally, a simulation example is given to verify the effectiveness and advantages of the developed controller design approach.
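Below is a hedged sketch of a dynamic event-triggering rule of the kind described above, where an internal variable eta relaxes a static threshold over time; the scalar signals, the update law, and all parameter values are illustrative assumptions, not the paper's exact mechanism.

```python
# Hedged sketch of a dynamic event-triggered transmission rule: send a
# packet only when the measurement error is large relative to the output
# plus an internal dynamic variable eta, which decays between events.
def dynamic_event_trigger(errors, outputs, sigma=0.2, lam=0.8, theta=5.0):
    """Trigger at step k when theta*e^2 > sigma*y^2 + eta;
    eta evolves as eta <- lam*eta + sigma*y^2 - e^2 (kept >= 0)."""
    eta, events = 1.0, []
    for k, (e, y) in enumerate(zip(errors, outputs)):
        if theta * e * e > sigma * y * y + eta:
            events.append(k)   # transmit; error would reset in a closed loop
        eta = max(0.0, lam * eta + sigma * y * y - e * e)
    return events

errs = [0.05, 0.1, 0.4, 0.02, 0.5, 0.1]
outs = [1.0] * 6
print(dynamic_event_trigger(errs, outs))  # only a large error fires an event
```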
Controllability and Observability of a Well-Posed System Coupled With a Finite-Dimensional System We consider coupled systems consisting of a well-posed and strictly proper (hence regular) subsystem and a finite-dimensional subsystem connected in feedback. The external world interacts with the coupled system via the finite-dimensional part, which receives the external input and sends out the output. Under several assumptions, we derive well-posedness, regularity, exact (or approximate) controllability and exact (or approximate) observability results for such coupled systems.
Stabilization for a Coupled PDE-ODE Control System A control system of an ODE and a diffusion PDE is discussed in this paper. The novelty lies in that the system is coupled. The method of PDE backstepping, together with some special techniques, is employed to stabilize the coupled PDE–ODE control system, which is transformed into an exponentially stable PDE–ODE cascade with an invertible integral transformation, and a state feedback boundary controller is designed. Moreover, an exponentially convergent observer for the anti-collocated setup is proposed, and the output feedback boundary control problem is solved. For both the state and output feedback boundary controllers, exponential stability analyses of the resulting closed-loop systems, in the sense of the corresponding norms, are given through rigorous proofs.
Sampled-Data Fuzzy Control for Nonlinear Coupled Parabolic PDE-ODE Systems. In this paper, a sampled-data fuzzy control problem is addressed for a class of nonlinear coupled systems, which are described by a parabolic partial differential equation (PDE) and an ordinary differential equation (ODE). Initially, the nonlinear coupled system is accurately represented by the Takagi-Sugeno (T-S) fuzzy coupled parabolic PDE-ODE model. Then, based on the T-S fuzzy model, a novel t...
Sampled-Data Fuzzy Control With Guaranteed Cost for Nonlinear Parabolic PDE Systems via Static Output Feedback This article introduces a sampled-data (SD) static output feedback fuzzy control (FC) with guaranteed cost for nonlinear parabolic partial differential equation (PDE) systems. First, a Takagi–Sugeno (T–S) fuzzy parabolic PDE model is employed to represent the nonlinear PDE system. Second, with the aid of the T–S fuzzy PDE model, a SD FC design with guaranteed cost under spatially averaged measurements is developed in the formulation of linear matrix inequalities by utilizing a time-dependent Lyapunov functional and inequality techniques, which can stabilize exponentially the PDE system while providing an optimized upper bound on the cost function. The membership functions of the proposed controller are determined by the measurement output and independent of the fuzzy PDE plant model. Finally, simulation results are presented to control the diffusion equation and the FitzHugh–Nagumo equation for demonstrating the effectiveness of the proposed method.
A secure control framework for resource-limited adversaries. Cyber-secure networked control is modeled, analyzed, and experimentally illustrated in this paper. An attack space defined by the adversary’s model knowledge, disclosure, and disruption resources is introduced. Adversaries constrained by these resources are modeled for a networked control system architecture. It is shown that attack scenarios corresponding to denial-of-service, replay, zero-dynamics, and bias injection attacks on linear time-invariant systems can be analyzed using this framework. Furthermore, the attack policy for each scenario is described and the attack’s impact is characterized using the concept of safe sets. An experimental setup based on a quadruple-tank process controlled over a wireless network is used to illustrate the attack scenarios, their consequences, and potential counter-measures.
Stability Analysis of Positive Interval Type-2 TSK Systems With Application to Energy Markets Positive systems play an important role in many fields including biology, chemistry, and economics, among others. This paper discusses the stability of interval type-2 discrete-time positive Takagi-Sugeno-Kang (TSK) fuzzy systems. It discusses positive TSK systems and their nonzero equilibrium point. It then provides sufficient conditions for their exponential stability and instability. All the proposed stability and instability conditions can be tested using linear matrix inequalities. The stability and instability tests are demonstrated through application to a TSK model of the electric power market under a variety of market conditions.
Integrator backstepping control of a brush DC motor turning a robotic load In this paper, we design and implement integrator backstepping controllers (i.e., adaptive and robust) for a brush DC motor driving a one-link robot manipulator. Through the use of Lyapunov stability-type arguments, we show that both of these controllers ensure “good” load position tracking despite parametric uncertainty throughout the entire electromechanical system. Experimental results are presented to illustrate the performance and feasibility of implementing the nonlinear control algorithms
Fog computing and its role in the internet of things Fog Computing extends the Cloud Computing paradigm to the edge of the network, thus enabling a new breed of applications and services. Defining characteristics of the Fog are: a) Low latency and location awareness; b) Wide-spread geographical distribution; c) Mobility; d) Very large number of nodes, e) Predominant role of wireless access, f) Strong presence of streaming and real time applications, g) Heterogeneity. In this paper we argue that the above characteristics make the Fog the appropriate platform for a number of critical Internet of Things (IoT) services and applications, namely, Connected Vehicle, Smart Grid, Smart Cities, and, in general, Wireless Sensors and Actuators Networks (WSANs).
Directed diffusion for wireless sensor networking Advances in processor, memory, and radio technology will enable small and cheap nodes capable of sensing, communication, and computation. Networks of such nodes can coordinate to perform distributed sensing of environmental phenomena. In this paper, we explore the directed-diffusion paradigm for such coordination. Directed diffusion is data-centric in that all communication is for named data. All nodes in a directed-diffusion-based network are application aware. This enables diffusion to achieve energy savings by selecting empirically good paths and by caching and processing data in-network (e.g., data aggregation). We explore and evaluate the use of directed diffusion for a simple remote-surveillance sensor network analytically and experimentally. Our evaluation indicates that directed diffusion can achieve significant energy savings and can outperform idealized traditional schemes (e.g., omniscient multicast) under the investigated scenarios.
The evolution of hardware platforms for mobile 'software defined radio' terminals. The deployment of communication systems mainly depends on the availability of appropriate microelectronics. Therefore, the Fraunhofer-Institut für Mikroelektronische Schaltungen und Systeme (IMS) considers the combined approach to communication and microelectronic system design as crucial. This paper explores the impact of anticipated communication services for future wireless communication systems on the evolution of microelectronics for wireless terminals. A roadmap is presented which predicts the hardware/software split of future software defined radio terminals (SDR terminals). Additionally, a new philosophy for analog and digital codesign is introduced, which may help to accelerate the appearance of mobile software defined radio terminals.
Interactive presentation: An FPGA based all-digital transmitter with radio frequency output for software defined radio In this paper, we present the architecture and implementation of an all-digital transmitter with radio frequency output targeting an FPGA device. FPGA devices have been widely adopted in the applications of digital signal processing (DSP) and digital communication. They are typically well suited for the evolving technology of software defined radios (SDR) due to their reconfigurability and programmability. However, FPGA devices are mostly used to implement digital baseband and intermediate frequency (IF) functionalities. Therefore, significant analog and RF components are still needed to fulfill the radio communication requirements. The all-digital transmitter presented in this paper directly synthesizes RF signal in the digital domain, therefore eliminates the need for most of the analog and RF components. The all-digital transmitter consists of one QAM modulator and one RF pulse width modulator (RFPWM). The binary output waveform from RFPWM is centered at 800MHz with 64QAM signaling format. The entire transmitter is implemented using Xilinx Virtex2pro device with on chip multi-gigabit transceiver (MGT). The adjacent channel leakage ratio (ACLR) measured in the 20 MHz passband is 45dB, and the measured error vector magnitude (EVM) is less than 1%. Our work extends the digital implementation of communication applications on an FPGA platform to radio frequency, therefore making a significant evolution towards an ideal SDR.
The real-time segmentation of indoor scene based on RGB-D sensor The vision system of a mobile robot is a low-level function that provides the target information of the current environment required by higher-level vision tasks. The real-time performance and robustness of object segmentation in cluttered environments remain a serious problem in robot vision. In this paper, a new real-time indoor scene segmentation method based on RGB-D images is presented, and the extracted primary object regions are then used for object recognition. First, the depth data are filtered with an improved version of a traditional filtering method. Then, using the improved depth information, the algorithm extracts the foreground and segments the objects in the 640×480 color image from the Kinect camera. Finally, the segmentation results are applied to object recognition in the indoor scene to validate the effectiveness of the scene segmentation. The results of indoor segmentation demonstrate the real-time performance and robustness of the proposed method. In addition, the segmentation results improve the accuracy of object recognition and reduce the time of object recognition in indoor cluttered scenes.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54 ×(CPU), 1.56 ×(GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02 ×(memristor-based PIM) and 10.48 ×(CMOS-based PIM), on average.
1.24
0.24
0.24
0.24
0.24
0.12
0.02
0.002222
0
0
0
0
0
0
DDJ-Adaptive SAR TDC-Based Timing Recovery for Multilevel Signaling. This paper describes a low-latency and low-power bimodal non-return-to-zero (NRZ) and pulse-amplitude modulation (PAM)-4 timing recovery circuit. This architecture reduces latency and power consumption by eliminating the need for data equalization in the timing recovery path in intersymbol interference (ISI)-limited channels. It directly equalizes data-dependent jitter (DDJ) by adaptively shifting...
A 32-Gb/s 0.46-pJ/bit PAM4 CDR Using a Quarter-Rate Linear Phase Detector and a Self-Biased PLL-Based Multiphase Clock Generator This article presents a four-level pulse-amplitude modulation (PAM4) quarter-rate clock and data recovery circuit (CDR). A quarter-rate linear phase detector (QLPD) is proposed to reduce the recovered clock jitter by removing the dithering jitter of the bang-bang PD. A self-biased phase-locked loop (PLL)-based multiphase clock generator (MCG) with a very wide loop bandwidth (around 600 MHz) is proposed to reduce the MCG power consumption and generate a low-jitter multiphase clock for the quarter-rate operation. Fabricated in a 40-nm CMOS process, the prototype achieves a bit efficiency of 0.46 pJ/bit at 32-Gb/s input data rate. The measured jitter tolerance (JTOL) at a bit error rate (BER) below 10⁻¹² is higher than 0.35 UIpp with the corner frequency at about 10 MHz. The measured integrated jitter of the 4-GHz recovered clock is 352.6 fs.
A 56-Gb/s 8-mW PAM4 CDR/DMUX with High Jitter Tolerance An analog one-eighth-rate CDR circuit detects both major and minor transitions in PAM4 data by calculating the Euclidean distances between the sampled points. Realized in 28-nm CMOS technology, the prototype exhibits a jitter transfer bandwidth of 160 MHz and a jitter tolerance of 1 UI at 10 MHz.
Modeling and Design of Multilevel Bang–Bang CDRs in the Presence of ISI and Noise Multilevel clock-and-data recovery (CDR) systems are analyzed, modeled, and designed. A stochastic analysis provides probability density functions that are used to estimate the effect of intersymbol interference (ISI) and additive white noise on the characteristics of the phase detector (PD) in the CDR. A novel slope-detector-based multilevel bang-bang CDR architecture is proposed and modeled usin...
The Truth About 2-Level Transition Elimination in Bang-Bang PAM-4 CDRs Reception of 4-level pulse amplitude modulation (PAM-4) requires a clock and data recovery (CDR) circuit, typically implemented by a PLL-like structure. An essential block in such a CDR is the phase detector which should detect whether the recovered clock leads or lags the incoming data edges. In typical implementations an incoming data edge is detected by sensing whether the incoming waveform crosses a data threshold level. However, there is some ambiguity in detecting the incoming data edge because PAM-4 modulation has 3 thresholds. If the waveform crosses multiple threshold levels, the level crossings will occur at different time instants due to the finite rise/fall time of the incoming waveform. In this work, we first analyze qualitatively and quantitatively CDR systems that use one threshold for phase adjustment. Here, eliminating the 2-level transitions decreases the amount of jitter injected by the phase detector. However, the available transitions for phase adjustment are also reduced, which lowers the CDR's robustness. Secondly, for CDR systems using three thresholds, a combination of two techniques: majority voting and elimination of 2-level transitions is investigated. We prove that in this case, the elimination of 2-level transitions is not needed and even gives a worse performance when implemented.
Channel Selection at RF Using Miller Bandpass Filters Channel selection at the input of RF receivers can considerably relax linearity requirements, leading to low-power, compact implementations. A GSM/WCDMA/802.11b/g receiver incorporates a Miller bandpass filter and its variants to achieve a channel bandwidth from 350 kHz to 20 MHz and a noise figure of 2.9 dB while consuming 20 mW. Fabricated in 65 nm CMOS technology, the receiver withstands a 0 dBm blocker at 20 MHz offset and exhibits a noise figure of 5.1 dB.
A 3.36-GHz Locking-Tuned Type-I Sampling PLL With −78.6-dBc Reference Spur Merging Single-Path Reference-Feedthrough-Suppression and Narrow-Pulse-Shielding Techniques This brief describes a type-I analog sampling phase-locked loop (S-PLL) featuring reference-feedthrough-suppression and narrow-pulse-shielding techniques in a single path to improve the reference (REF) spur. Specifically, we realize the former by inserting a T-shape switch with one center-tap ground, while the latter tackles the voltage ripple caused by the sampling non-idealities. Also, we can tu...
A study of phase noise in colpitts and LC-tank CMOS oscillators This paper presents a study of phase noise in CMOS Colpitts and LC-tank oscillators. Closed-form symbolic formulas for the 1/f^2 phase-noise region are derived for both the Colpitts oscillator (either single-ended or differential) and the LC-tank oscillator, yielding highly accurate results under very general assumptions. A comparison between the differential Colpitts and the LC-tank oscillator is also carried out, which shows that the latter is capable of a 2-dB lower phase-noise figure-of-merit (FoM) when simplified oscillator designs and ideal MOS models are adopted. Several prototypes of both Colpitts and LC-tank oscillators have been implemented in a 0.35-μm CMOS process. The best performance of the LC-tank oscillators shows a phase noise of -142 dBc/Hz at 3-MHz offset frequency from a 2.9-GHz carrier with a 16-mW power consumption, resulting in an excellent FoM of ∼189 dBc/Hz. For the same oscillation frequency, the FoM displayed by the differential Colpitts oscillators is ∼5 dB lower.
A Linear Representation of Dynamics of Boolean Networks A new matrix product, called the semi-tensor product of matrices, is reviewed. Using it, a matrix expression of logic is proposed, where a logical variable is expressed as a vector and a logical function is expressed as a multilinear mapping. Under this framework, a Boolean network equation is converted into an equivalent algebraic form as a conventional discrete-time linear system. Analyzing the transition matrix of the linear system, formulas are obtained to show a) the number of fixed points; b) the numbers of cycles of different lengths; c) the transient period, for all points to enter the set of attractors; and d) the basin of each attractor. The corresponding algorithms are developed and applied to some examples.
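The algebraic form described above is easy to reproduce for small networks. The sketch below (a minimal illustration, not the paper's semi-tensor-product machinery) encodes an n-variable Boolean state as an index into {0,1}^n, builds the 2^n by 2^n transition matrix L, and reads off fixed points and short cycles from traces of powers of L; the example network is hypothetical.

```python
# Minimal sketch: one step of a Boolean network's dynamics becomes
# multiplication by a 0/1 transition matrix L, as in the algebraic form.
import numpy as np
from itertools import product

def network_transition_matrix(update, n):
    """Build L with L[next_state, state] = 1 for an n-variable network.

    `update` maps a tuple of n bits to the next tuple of n bits.
    """
    N = 2 ** n
    L = np.zeros((N, N), dtype=int)
    for idx, bits in enumerate(product([0, 1], repeat=n)):
        jdx = int("".join(map(str, update(bits))), 2)
        L[jdx, idx] = 1
    return L

# Hypothetical example: x1' = x2, x2' = x1 AND x2
f = lambda b: (b[1], b[0] & b[1])
L = network_transition_matrix(f, 2)

# trace(L) counts fixed points; trace(L^k) counts states on cycles whose
# length divides k (so it includes the fixed points as well).
print("fixed points:", np.trace(L))
print("states on cycles of length dividing 2:",
      np.trace(np.linalg.matrix_power(L, 2)))
```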
A study of phase noise in CMOS oscillators This paper presents a study of phase noise in two inductorless CMOS oscillators. First-order analysis of a linear oscillatory system leads to a noise shaping function and a new definition of Q. A linear model of CMOS ring oscillators is used to calculate their phase noise, and three phase noise phenomena, namely, additive noise, high-frequency multiplicative noise, and low-frequency multiplicative noise, are identified and formulated. Based on the same concepts, a CMOS relaxation oscillator is also analyzed. Issues and techniques related to simulation of noise in the time domain are described, and two prototypes fabricated in a 0.5-μm CMOS technology are used to investigate the accuracy of the theoretical predictions. Compared with the measured results, the calculated phase noise values of a 2-GHz ring oscillator and a 900-MHz relaxation oscillator at 5 MHz offset have an error of approximately 4 dB. Voltage-controlled oscillators (VCOs) are an integral part of phase-locked loops, clock recovery circuits, and frequency synthesizers. Random fluctuations in the output frequency of VCOs, expressed in terms of jitter and phase noise, have a direct impact on the timing accuracy where phase alignment is required and on the signal-to-noise ratio where frequency translation is performed. In particular, RF oscillators employed in wireless transceivers must meet stringent phase noise requirements, typically mandating the use of passive LC tanks with a high quality factor Q. However, the trend toward large-scale integration and low cost makes it desirable to implement oscillators monolithically. The paucity of literature on noise in such oscillators together with a lack of experimental verification of underlying theories has motivated this work. This paper provides a study of phase noise in two inductorless CMOS VCOs. Following a first-order analysis of a linear oscillatory system and introducing a new definition of Q, we employ a linearized model of ring oscillators to obtain an estimate of their noise behavior. We also describe the limitations of the model, identify three mechanisms leading to phase noise, and use the same concepts to analyze a CMOS relaxation oscillator. In contrast to previous studies where time-domain jitter has been investigated (1), (2), our analysis is performed in the frequency domain to directly determine the phase noise. Experimental results obtained from a 2-GHz ring oscillator and a 900-MHz relaxation oscillator indicate that, despite many simplifying approximations, lack of accurate MOS models for RF operation, and the use of simple noise
The rainbow skip graph: a fault-tolerant constant-degree distributed data structure We present a distributed data structure, which we call the rainbow skip graph. To our knowledge, this is the first peer-to-peer data structure that simultaneously achieves high fault-tolerance, constant-sized nodes, and fast update and query times for ordered data. It is a non-trivial adaptation of the SkipNet/skip-graph structures of Harvey et al. and Aspnes and Shah, so as to provide fault-tolerance as these structures do, but to do so using constant-sized nodes, as in the family tree structure of Zatloukal and Harvey. It supports successor queries on a set of n items using O(log n) messages with high probability, an improvement over the expected O(log n) messages of the family tree. Our structure achieves these results by using the following new constructs:• Rainbow connections: parallel sets of pointers between related components of nodes, so as to achieve good connectivity between "adjacent" components, using constant-sized nodes.• Hydra components: highly-connected, highly fault-tolerant components of constant-sized nodes, which will contain relatively large connected subcomponents even under the failure of a constant fraction of the nodes in the component.We further augment the hydra components in the rainbow skip graph by using erasure-resilient codes to ensure that any large subcomponent of nodes in a hydra component is sufficient to reconstruct all the data stored in that component. By carefully maintaining the size of related components and hydra components to be O(log n), we are able to achieve fast times for updates and queries in the rainbow skip graph. In addition, we show how to make the communication complexity for updates and queries be worst-case, at the expense of more conceptual complexity and a slight degradation in the node congestion of the data structure.
Clocking Analysis, Implementation and Measurement Techniques for High-Speed Data Links—A Tutorial The performance of high-speed wireline data links depends crucially on the quality and precision of their clocking infrastructure. For future applications, such as microprocessor systems that require terabytes/s of aggregate bandwidth, signaling system designers will have to become even more aware of detailed clock design tradeoffs in order to jointly optimize I/O power, bandwidth, reliability, silicon area and testability. The goal of this tutorial is to assist I/O circuit and system designers in developing an intuitive and practical understanding of I/O clocking tradeoffs at all levels of the link hierarchy, from circuit-level implementation to system-level architecture.
Exploration of Constantly Connected Dynamic Graphs Based on Cactuses. We study the problem of exploration by a mobile entity (agent) of a class of dynamic networks, namely constantly connected dynamic graphs. This problem has already been studied in the case where the agent knows the dynamics of the graph and the underlying graph is a ring of n vertices [5]. In this paper, we consider the same problem and we suppose that the underlying graph is a cactus graph (a connected graph in which any two simple cycles have at most one vertex in common). We propose an algorithm that allows the agent to explore these dynamic graphs in at most 2^{O(√log n)} n time units. We show that the lower bound of the algorithm is 2^{Ω(√log n)} n time units.
Robust Biopotential Acquisition via a Distributed Multi-Channel FM-ADC. This contribution presents an active electrode system for biopotential acquisition using a distributed multi-channel FM-modulated analog front-end and ADC architecture. Each electrode captures one biopotential signal and converts to a frequency modulated signal using a VCO tuned to a unique frequency. Each electrode then buffers its output onto a shared analog line that aggregates all of the FM-mo...
1.11
0.12
0.12
0.1
0.1
0.04
0.01
0.001778
0
0
0
0
0
0
A high-efficiency, wide workload range, digital off-time modulation (DOTM) DC-DC converter with asynchronous power saving technique Conventionally, a switching converter requires multi-mode operation to maintain good stability and high efficiency across a wide workload range. Leveraging advanced digital signal processing, this work presents an asynchronous digital controller with a dynamic power saving technique to achieve high power efficiency. The regulation is based on off-time modulation, in which an adaptive resolution adjustment is proposed to extend operation toward the light-load range. The DC-DC converter is fabricated in a 0.18-µm CMOS process. The input voltage is from 2.7 to 3.6 V and the regulated output is 1.8 V. The switching frequency is from 44 kHz to 1.65 MHz and the maximum output ripple is 20 mV with a 10-µF capacitor and a 2.2-µH inductor. The power efficiency is higher than 91% for the workload range from 3 to 400 mA.
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use dominance frontiers, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
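The dominance-frontier concept above admits a very short implementation once immediate dominators are known. The sketch below uses the "runner" formulation popularized later by Cooper, Harvey, and Kennedy rather than Cytron et al.'s original algorithm; the CFG and the precomputed `idom` map are hypothetical inputs.

```python
# Sketch: DF(n) = nodes y such that n dominates a predecessor of y but
# does not strictly dominate y. Frontiers arise only at join nodes.

def dominance_frontiers(preds, idom):
    """preds: node -> CFG predecessors; idom: node -> immediate dominator."""
    df = {n: set() for n in preds}
    for y, ps in preds.items():
        if len(ps) < 2:
            continue                      # not a join node
        for p in ps:
            runner = p
            while runner != idom[y]:      # walk up the dominator tree
                df[runner].add(y)
                runner = idom[runner]
    return df

# Tiny diamond CFG: entry -> a, b; a, b -> merge
preds = {"entry": [], "a": ["entry"], "b": ["entry"], "merge": ["a", "b"]}
idom = {"entry": None, "a": "entry", "b": "entry", "merge": "entry"}
print(dominance_frontiers(preds, idom))   # merge lands in DF(a) and DF(b)
```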
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ⋄W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ⋄W. Thus, ⋄W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
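A toy version of Chord's key-to-node mapping can be written in a few lines. The sketch below makes simplifying assumptions not in the paper: a static, globally known node list, a 16-bit identifier space, and SHA-1 hashing; real Chord resolves `successor` by routing through finger tables between nodes rather than scanning a sorted local list.

```python
# Sketch: keys and nodes hash onto a ring of size 2**M; a key is stored
# at its successor node; fingers give the O(log n) routing shortcuts.
import hashlib

M = 16                      # identifier bits (small, for illustration)
RING = 2 ** M

def chord_id(name: str) -> int:
    return int(hashlib.sha1(name.encode()).hexdigest(), 16) % RING

def successor(nodes, key_id):
    """First node clockwise from key_id (nodes must be sorted)."""
    for n in nodes:
        if n >= key_id:
            return n
    return nodes[0]         # wrap around the ring

def finger_table(nodes, n):
    """Node n's fingers: successor(n + 2**i) for i = 0..M-1."""
    return [successor(nodes, (n + 2 ** i) % RING) for i in range(M)]

nodes = sorted(chord_id(f"node-{i}") for i in range(8))
key = chord_id("some-data-item")
print("key", key, "stored at node", successor(nodes, key))
print("fingers of first node:", finger_table(nodes, nodes[0]))
```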
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area^2 product (EDA^2P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that, when die cost is not taken into account, clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account, configuring clusters with 4 cores gives the best EDA^2P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
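As a concrete instance of the method surveyed above, the sketch below applies ADMM to the lasso, one of the applications the review discusses, using the standard x/z/u splitting with penalty parameter rho; the problem sizes and the fixed iteration count (instead of primal/dual residual stopping tests) are simplifications.

```python
# Sketch: ADMM for min 0.5*||Ax - b||^2 + lam*||x||_1.
import numpy as np

def soft_threshold(v, k):
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

def admm_lasso(A, b, lam, rho=1.0, iters=200):
    n = A.shape[1]
    x = z = u = np.zeros(n)
    # Factor once: the x-update solves (A^T A + rho I) x = A^T b + rho(z - u)
    Q = np.linalg.inv(A.T @ A + rho * np.eye(n))
    Atb = A.T @ b
    for _ in range(iters):
        x = Q @ (Atb + rho * (z - u))
        z = soft_threshold(x + u, lam / rho)   # proximal step for the l1 norm
        u = u + x - z                          # scaled dual update
    return z

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20); x_true[:3] = [2.0, -1.5, 1.0]
b = A @ x_true + 0.01 * rng.standard_normal(50)
print(np.round(admm_lasso(A, b, lam=1.0), 2))  # sparse, shrunk estimate
```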
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)-size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε^2}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D+√n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(log N), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator, while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via error amplifier) with a moderate-gain high-frequency path (via bandpass filter) at the inputs of the PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate the two zeros required in a Type III compensator. A constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process, with the constant Gm/C biasing technique applied to one of the designs. Measurement results show that the converter output settles within 7 μs for a load current step of 500 mA. Peak efficiency of 97% is obtained at 360 mW output power, and high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of the proposed compensator are reduced by >75% in both designs, compared to an equivalent conventional Type III compensator.
Conductance modulation techniques in switched-capacitor DC-DC converter for maximum-efficiency tracking and ripple mitigation in 22nm Tri-gate CMOS Active conduction modulation techniques are demonstrated in a fully integrated multi-ratio switched-capacitor voltage regulator with hysteretic control, implemented in 22nm tri-gate CMOS with high-density MIM capacitors. We present (i) an adaptive switching frequency and switch-size scaling scheme for maximum efficiency tracking across a wide range of voltages and currents, governed by a frequency-based control law that is experimentally validated across multiple dies and temperatures, and (ii) a simple active ripple mitigation technique that modulates the gate drive of select MOSFET switches effectively in all conversion modes. Efficiency boosts of up to 15% are measured under light-load conditions. Load-independent output ripple of <50 mV is achieved, enabling fewer interleaved phases. Testchip implementations and measurements demonstrate ease of integration in SoC designs, power efficiency benefits, and EMI/RFI improvements.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54 ×(CPU), 1.56 ×(GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02 ×(memristor-based PIM) and 10.48 ×(CMOS-based PIM), on average.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
A mixed finite element method for a time-fractional fourth-order partial differential equation In this paper, a numerical theory based on the mixed finite element method for a time-fractional fourth-order partial differential equation (PDE) is presented and analyzed. An auxiliary variable σ = Δu is introduced, so that the fourth-order equation can be split into a coupled system of two second-order equations. The time-fractional Caputo derivative is discretized by a finite difference method and the spatial direction is approximated by the mixed finite element method. The stabilities based on a priori analysis for the two variables are discussed and some a priori error estimates in the L^2-norm for the scalar unknown u and the variable σ = Δu are derived, respectively. Moreover, an a priori error result in the H^1-norm for the scalar unknown u is also proved. To verify the theoretical analysis, a numerical test is carried out using a Matlab procedure.
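As a minimal illustration of the time discretization named above (and only that piece, not the paper's mixed finite element method), the sketch below applies the standard L1 finite-difference scheme for the Caputo derivative of order alpha in (0,1) to the scalar test problem D^alpha u = -u with u(0) = 1; all parameter values are arbitrary.

```python
# Sketch: L1 scheme for the Caputo derivative, with kernel weights
# b_k = (k+1)^(1-alpha) - k^(1-alpha), applied to D^alpha u = -u.
import math

def caputo_l1_decay(alpha=0.5, T=2.0, N=200):
    dt = T / N
    c = dt ** alpha * math.gamma(2 - alpha)
    b = [(k + 1) ** (1 - alpha) - k ** (1 - alpha) for k in range(N)]
    u = [1.0]
    for n in range(1, N + 1):
        # History sum over past increments, weighted by the L1 kernel
        hist = sum(b[k] * (u[n - k] - u[n - k - 1]) for k in range(1, n))
        u.append((u[n - 1] - hist) / (1.0 + c))
    return u

u = caputo_l1_decay()
print("u(T) ≈", round(u[-1], 4))  # decays slower than exp(-T) for alpha < 1
```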
Consensus-based control for a network of diffusion PDEs with boundary local interaction In this technical note the problem of driving the state of a network of identical agents, modeled by boundary-controlled heat equations, towards a common steady-state profile is addressed. Decentralized consensus protocols are proposed to address two distinct problems. The first problem is that of steering the states of all agents towards the same constant steady-state profile which corresponds to the spatial average of the agents initial condition. The second problem deals with the case where the controlled boundaries of the agents dynamics are corrupted by additive persistent disturbances. To achieve synchronization between agents, while completely rejecting the effect of the boundary disturbances, a nonlinear sliding-mode based consensus protocol is proposed. Simulation results are presented to support the effectiveness of the proposed algorithms.
Fractional modeling of viscoelasticity in 3D cerebral arteries and aneurysms. We develop efficient numerical methods for fractional order PDEs, and employ them to investigate viscoelastic constitutive laws for arterial wall mechanics. Recent simulations using one-dimensional models [1] have indicated that fractional order models may offer a more powerful alternative for modeling the arterial wall response, exhibiting reduced sensitivity to parametric uncertainties compared with the integer-calculus-based models. Here, we study three-dimensional (3D) fractional PDEs that naturally model the continuous relaxation properties of soft tissue, and for the first time employ them to simulate fluid-structure interactions for patient-specific brain aneurysms. To deal with the high memory requirements and in order to accelerate the numerical evaluation of hereditary integrals, we employ a fast convolution method [2] that reduces the memory cost to O(log N) and the computational complexity to O(N log N). Furthermore, we combine the fast convolution with high-order backward differentiation to achieve third-order time integration accuracy. We confirm that in 3D viscoelastic simulations, the integer order models strongly depend on the relaxation parameters, while the fractional order models are less sensitive. As an application to long-time simulations in complex geometries, we also apply the method to modeling fluid-structure interaction of a 3D patient-specific compliant cerebral artery with an aneurysm. Taken together, our findings demonstrate that fractional calculus can be employed effectively in modeling complex behavior of materials in realistic 3D time-dependent problems if properly designed efficient algorithms are employed to overcome the extra memory requirements and computational complexity associated with the non-local character of fractional derivatives.
Mayer-Type Optimal Control of Probabilistic Boolean Control Network With Uncertain Selection Probabilities. This article considers a Mayer-type optimal control problem of probabilistic Boolean control networks (PBCNs) with uncertainty in the selection probabilities, which obey Beta probability distributions. The expectation with respect to both the selection probabilities and the transitions of state variables is set as a cost function, and it deduces an equivalent formulation as a multistage decision prob...
Global Mittag–Leffler Stability of the Delayed Fractional-Coupled Reaction-Diffusion System on Networks Without Strong Connectedness In this article, we mainly consider the existence of solutions and global Mittag–Leffler stability of delayed fractional-order coupled reaction-diffusion neural networks without strong connectedness. Using the Leray–Schauder fixed point theorem and the Lyapunov method, some criteria for the existence of solutions and global Mittag–Leffler stability are given. Finally, the correctness of the theory is verified by a numerical example.
Conflict resolution for air traffic management: a study in multiagent hybrid systems Air traffic management (ATM) of the future allows for the possibility of free flight, in which aircraft choose their own optimal routes, altitudes, and velocities. The safe resolution of trajectory conflicts between aircraft is necessary to the success of such a distributed control system. In this paper, we present a method to synthesize provably safe conflict resolution manoeuvres. The method models the aircraft and the manoeuvre as a hybrid control system and calculates the maximal set of safe initial conditions for each aircraft so that separation is assured in the presence of uncertainties in the actions of the other aircraft. Examples of manoeuvres using both speed and heading changes are worked out in detail
Cache operations by MRU change The performance of set associative caches is analyzed. The method used is to group the cache lines into regions according to their positions in the replacement stacks of a cache, and then to observe how the memory access of a CPU is distributed over these regions. Results from the preserved CPU traces show that the memory accesses are heavily concentrated on the most recently used (MRU) region in the cache. The concept of MRU change is introduced; the idea is to use the event that the CPU accesses a non-MRU line to approximate the time the CPU is changing its working set. The concept is shown to be useful in many aspects of cache design and performance evaluation, such as comparison of various replacement algorithms, improvement of prefetch algorithms, and speedup of cache simulation.
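The measurement idea above is simple to reproduce on an address trace. The sketch below replays a hypothetical trace through per-set LRU stacks, records which stack position each hit lands in, and counts accesses to non-MRU lines as "MRU changes"; the cache geometry and the trace are made up for illustration.

```python
# Sketch: per-set LRU stacks; hits to position 0 are MRU hits, anything
# else (including misses) counts as an MRU change.
from collections import defaultdict

def mru_change_stats(trace, sets=4, ways=4, line=64):
    stacks = defaultdict(list)           # set index -> LRU stack of tags
    position_hits = [0] * ways
    mru_changes = misses = 0
    for addr in trace:
        blk = addr // line
        s, tag = blk % sets, blk // sets
        stack = stacks[s]
        if tag in stack:
            pos = stack.index(tag)       # 0 == MRU position
            position_hits[pos] += 1
            if pos != 0:
                mru_changes += 1         # possible working-set change
            stack.remove(tag)
        else:
            misses += 1
            mru_changes += 1
            if len(stack) == ways:
                stack.pop()              # evict the LRU line
        stack.insert(0, tag)             # promote to MRU
    return position_hits, mru_changes, misses

trace = [0, 64, 0, 0, 128, 64, 0, 4096, 0]
print(mru_change_stats(trace))           # hits concentrate at position 0
```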
Broadband MIMO-OFDM Wireless Communications Orthogonal frequency division multiplexing (OFDM) is a popular method for high data rate wireless transmission. OFDM may be combined with antenna arrays at the transmitter and receiver to increase the diversity gain and/or to enhance the system capacity on time-varying and frequency-selective channels, resulting in a multiple-input multiple-output (MIMO) configuration. The paper explores various p...
Supporting Aggregate Queries Over Ad-Hoc Wireless Sensor Networks We show how the database community's notion of a generic query interface for data aggregation can be applied to ad-hoc networks of sensor devices. As has been noted in the sensor network literature, aggregation is important as a data reduction tool; networking approaches, however, have focused on application specific solutions, whereas our in-network aggregation approach is driven by a general purpose, SQL-style interface that can execute queries over any type of sensor data while providing opportunities for significant optimization. We present a variety of techniques to improve the reliability and performance of our solution. We also show how grouped aggregates can be efficiently computed and offer a comparison to related systems and database projects.
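As an illustration of the in-network aggregation idea (a toy model, not the system's SQL machinery), the sketch below decomposes an AVERAGE query into a (sum, count) partial-state record that each node merges with its children's records before forwarding a single record up a hypothetical routing tree.

```python
# Sketch: AVG is not directly mergeable, but (sum, count) is; each node
# sends exactly one record upward regardless of subtree size.

def leaf_record(reading):
    return (reading, 1)                      # (sum, count)

def merge(a, b):
    return (a[0] + b[0], a[1] + b[1])

def aggregate(node, children, readings):
    """Post-order merge up the routing tree; one record per node."""
    rec = leaf_record(readings[node])
    for c in children.get(node, []):
        rec = merge(rec, aggregate(c, children, readings))
    return rec

# Hypothetical routing tree: root 0 with children 1, 2; node 2 has 3, 4
children = {0: [1, 2], 2: [3, 4]}
readings = {0: 20.0, 1: 22.5, 2: 21.0, 3: 19.5, 4: 23.0}
s, n = aggregate(0, children, readings)
print("network AVG =", s / n)
```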
MPFR: A multiple-precision binary floating-point library with correct rounding This article presents a multiple-precision binary floating-point library, written in the ISO C language, and based on the GNU MP library. Its particularity is to extend ideas from the IEEE 754 standard to arbitrary precision, by providing correct rounding and exceptions. We demonstrate how these strong semantics are achieved, with no significant slowdown with respect to other arbitrary-precision tools, and discuss a few applications where such a library can be useful.
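MPFR itself is a C library, so as a stand-in the sketch below uses Python's decimal module to illustrate the contract the abstract describes: every operation behaves as if computed exactly and then rounded once, in a user-chosen precision and rounding mode. The precisions and values are arbitrary, and decimal (radix 10) differs from MPFR's binary format.

```python
# Sketch of the correct-rounding contract using Python's decimal module.
from decimal import Decimal, getcontext, ROUND_HALF_EVEN, ROUND_FLOOR

getcontext().prec = 10                    # working precision (decimal digits)
getcontext().rounding = ROUND_HALF_EVEN   # round-to-nearest-even default

one_third = Decimal(1) / Decimal(3)
print(one_third)                          # 0.3333333333, correctly rounded

getcontext().rounding = ROUND_FLOOR       # directed rounding, as MPFR offers
print(Decimal(2).sqrt())                  # sqrt(2) rounded toward -infinity
```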
RockSalt: better, faster, stronger SFI for the x86 Software-based fault isolation (SFI), as used in Google's Native Client (NaCl), relies upon a conceptually simple machine-code analysis to enforce a security policy. But for complicated architectures such as the x86, it is all too easy to get the details of the analysis wrong. We have built a new checker that is smaller, faster, and has a much reduced trusted computing base when compared to Google's original analysis. The key to our approach is automatically generating the bulk of the analysis from a declarative description which we relate to a formal model of a subset of the x86 instruction set architecture. The x86 model, developed in Coq, is of independent interest and should be usable for a wide range of machine-level verification tasks.
A Dht-Based Discovery Service For The Internet Of Things Current trends towards the Future Internet are envisaging the conception of novel services endowed with context-aware and autonomic capabilities to improve end users' quality of life. The Internet of Things paradigm is expected to contribute towards this ambitious vision by proposing models and mechanisms enabling the creation of networks of "smart things" on a large scale. It is widely recognized that efficient mechanisms for discovering available resources and capabilities are required to realize such a vision. The contribution of this work is a novel discovery service for the Internet of Things. The proposed solution adopts a peer-to-peer approach to guarantee scalability, robustness, and easy maintenance of the overall system. While most existing peer-to-peer discovery services proposed for the IoT support solely exact-match queries on a single attribute (i.e., the object identifier), our solution can handle multiattribute and range queries. We defined a layered approach by distinguishing three main aspects: multiattribute indexing, range query support, and peer-to-peer routing. We chose to adopt an over-DHT indexing scheme to guarantee ease of design and implementation. We report on the implementation of a proof of concept in a dangerous-goods monitoring scenario, and, finally, we discuss test results for structural properties and query performance evaluation.
Current-mode adaptively hysteretic control for buck converters with fast transient response and improved output regulation This paper presents a current-mode adaptively hysteretic control (CMAHC) technique to achieve fast transient response for DC-DC buck converters. A complementary full-range current sensor comprising charging-path and discharging-path sensing transistors is proposed to track the inductor current seamlessly. With the proposed current-mode adaptively hysteretic topology, the inductor current is continuously monitored, and the adaptively hysteretic threshold is dynamically adjusted according to feedback information derived from the output voltage level. Therefore, a fast load-transient response can be achieved. In addition, the output regulation performance is improved by the proposed dynamic current-scaling circuitry (DCSC). Moreover, the proposed CMAHC topology can be used in a nearly zero-R_ESR design configuration. The prototype fabricated using a TSMC 0.25-μm CMOS process occupies 1.78 mm2 including all bonding pads. Experimental results show that the output voltage ripple is smaller than 30 mV over a wide loading current range from 0 mA to 500 mA with maximum power conversion efficiency higher than 90%. The recovery time from light to heavy load (100 to 500 mA) is smaller than 5 μs.
A VCO-Based Nonuniform Sampling ADC Using a Slope-Dependent Pulse Generator This paper presents a voltage-controlled oscillator (VCO)-based nonuniform sampling analog-to-digital converter (ADC) as an alternative to the level-crossing (LC)-based converters for digitizing biopotential signals. This work aims to provide a good signal-to-noise-and-distortion ratio at a low average sampling rate. In the proposed conversion method, a slope-dependent pulse generation block is used to provide a variable sample rate adjusted according to the input signal's slope. Simulation results show that the introduced method meets a target reconstruction quality with a sampling rate approaching 92 Sps, while on the same MIT-BIH Arrhythmia N 106 ECG benchmark, the classic LC-based approach requires a sampling rate higher than 500 Sps. The benefits of the proposed method are more remarkable when the input signal is very noisy. The proposed ADC achieves a compression ratio close to 4, but with only 5.4% root-mean-square difference when tested using the MIT-BIH Arrhythmia Database.
1.2
0.2
0.2
0.2
0.2
0.066667
0
0
0
0
0
0
0
0
An Efficient Mixed-Signal 2.4-GHz Polar Power Amplifier in 65-nm CMOS Technology A 65-nm digitally modulated polar transmitter incorporates a fully integrated, efficient 2.4-GHz switching Inverse Class-D power amplifier. Low-power digital filtering on the amplitude path helps remove spectral images for coexistence. The transmitter integrates the complete LO distribution network and digital drivers. Operating from a 1-V supply, the PA has 21.8-dBm peak output power with 44% efficiency. Simple static predistortion helps the transmitter meet EVM and mask requirements of 802.11g 54-Mb/s WLAN data with 18% average efficiency.
Linearized Dual-Band Power Amplifiers With Integrated Baluns in 65 nm CMOS for a 2×2 802.11n MIMO WLAN SoC Fully integrated dual-band power amplifiers with on-chip baluns for 802.11n MIMO WLAN applications are presented. With a 3.3 V supply, the PAs produce a saturated output power of 28.3 dBm and 26.7 dBm with peak drain efficiency of 35.3% and 25.3% for the 2.4 GHz and 5 GHz bands, respectively. By utilizing multiple fully self-contained linearization algorithms, an EVM of -25 dB is achieved at 22.4 dBm for the 2.4 GHz band and 20.5 dBm for the 5 GHz band while transmitting 54 Mb/s OFDM. The chip is fabricated in standard 65 nm CMOS and the PAs occupy 0.31 mm2 (2.4 GHz) and 0.27 mm2 (5 GHz) of area. To examine the reliability of the PAs, accelerated aging tests were performed on several hundred parts without a single failure.
Digitally-Controlled Polar Transmitter Using a Watt-Class Current-Mode Class-D CMOS Power Amplifier and Guanella Reverse Balun for Handset Applications A digitally-controlled polar transmitter with a watt-class CMOS power amplifier is demonstrated, implemented in a 0.15 μm RF CMOS process. Stacked FETs in a current-mode class-D configuration are used to obtain high breakdown voltage and high efficiency in the output stage, and a doughnut-shaped Guanella reverse balun is applied to achieve a 1-to-4 impedance transformation with less than 1 dB insertion loss. The amplifier has 31 dBm output power with 51% drain efficiency at 0.75 GHz frequency under single tone testing. The output stage is fed by a buck converter employing digital pulsewidth modulation with 47 MHz pulse rate synchronized with a 3 GHz clock. Digital compensation techniques were developed to maintain linearity. WCDMA HPSK modulation was demonstrated using a pulse pattern generator-based measurement bench. Overall efficiency of 26.5% was achieved while maintaining ACLRs within 3GPP specifications at 24 dBm average output power.
SFDR-bandwidth limitations for high speed high resolution current steering CMOS D/A converters Although very high update rates are achieved in recent publications on high resolution D/A converters, the bottleneck in the design is achieving a high spurious-free output signal bandwidth. The influence of the dynamic output impedance on the chip performance has been analyzed and identified as an important limitation for the spurious-free dynamic range (SFDR) of high resolution DACs. Based on the presented analysis, an optimized topology is proposed.
A fully differential ultra-compact broadband transformer based quadrature generation scheme This paper presents an ultra-compact transformer-based quadrature generation scheme, which converts a differential input signal to fully differential quadrature outputs with low passive loss, broad bandwidth, and robustness against process variations. A new layout strategy is proposed to implement this 6-port transformer-based network within only one inductor-footprint for significant area saving. A 5 GHz quadrature generation design is implemented in a standard 65 nm CMOS process with a core area of only 260 μm by 260 μm, achieving size reduction of over 1,600 times compared to a 5GHz λ/4 branch-line coupler. This implementation achieves 0.82 dB signal loss at 5 GHz and maximum 3.8° phase error and ±0.5dB amplitude mismatch within a bandwidth of 13% (4.75 GHz to 5.41 GHz). Measurement results over 9 independent samples show a standard phase deviation of 1.9° verifying the robustness of the design.
CMOS Doherty Amplifier With Variable Balun Transformer and Adaptive Bias Control for Wireless LAN Application This paper presents a novel CMOS Doherty power amplifier (PA) with an impedance inverter using a variable balun transformer (VBT) and adaptive bias control of an auxiliary amplifier. Unlike a conventional quarter-wavelength (λ/4) transmission line impedance inverter of a Doherty PA, the proposed VBT impedance inverter can achieve load modulation without any phase delay circuit. As a result, a λ/4 phase compensation circuit at the input path of the auxiliary amplifier can be removed, and the total size of the Doherty PA can be reduced. Additionally, an enhancement of the power efficiency at backed-off power levels can successfully be achieved with an adaptive gate bias in a common gate stage of the auxiliary amplifier. The PA, fabricated with 0.13-μm CMOS technology, achieved a 1-dB compression point (P1 dB) of 31.9 dBm and a power-added efficiency (PAE) at P1 dB of 51%. When the PA is tested with 802.11g WLAN orthogonal frequency division multiplexing (OFDM) signal of 54 Mb/s, a 25-dB error vector magnitude (EVM) compliant output power of 22.8 dBm and a PAE of 30.1% are obtained, respectively.
Highly Efficient RF Transmitter Over Broad Average Power Range Using Multilevel Envelope-Tracking Power Amplifier We present a highly efficient RF transmitter over a broad average power range using a multilevel envelope-tracking power amplifier (ML-ET PA). The ML-ET PA delivers enhanced efficiency at a back-off power region for handset applications. The supply modulator consists of a linear regulator and a switching converter. The DC supply of the linear regulator is adjusted according to the average power of the envelope signal, and the power-supply-independent class-AB output stage is employed to avoid the crossover distortion generated by the different DC supply voltages. The switch current level is not optimally adjusted by itself following the power back-off level, because the DC supply voltages of the linear regulator and switching converter are different. For optimum operation over the entire power region, the switch current level is adjusted by detecting the input envelope voltage level. For a 20-MHz long term evolution signal with a 7.5 dB peak-to-average power ratio, the proposed supply modulator delivers a peak voltage of 4.5 V to a 6.5-Ω load with a measured efficiency of 75.9%. The proposed ET PA delivers a power-added efficiency (PAE) of 40%, gain of 28.8 dB, evolved universal terrestrial radio access adjacent channel leakage ratio of 35.3 dBc, and error vector magnitude of 3.23% at an average output power of 27 dBm and an operating frequency of 1.71 GHz. At a 10 dB back-off point, the PAE is improved from 14.5% to 18.7% compared to the conventional ET PA.
A Class-G Supply Modulator and Class-E PA in 130 nm CMOS A class-G supply modulator utilizes parallel low-dropout (LDO) regulators that are controlled by comparators and negative feedback. It optimizes the power consumption of a nonlinear power amplifier (PA) operating with supply modulation, such that it draws current from one of multiple appropriately sized supply voltages as determined by the input signal envelope. The class-G modulator is used in conjunction with a class-E PA operating in an envelope elimination and restoration (EER) mode to efficiently amplify signals with large peak-to-average ratios. The measured maximum output power and power added efficiency (PAE) are 29.3 dBm and 69%, respectively. The class-G technique is demonstrated for a 64 QAM, OFDM input signal (symbol period = 4 μs) wherein the measured error vector magnitude (EVM) is 2.5% and the average efficiency is 22.6%.
Max-Min D-Cluster Formation in Wireless Ad Hoc Networks An ad hoc network may be logically represented as a set of clusters. The clusterheads form a d-hop dominating set: each node is at most d hops from a clusterhead. Clusterheads form a virtual backbone and may be used to route packets for nodes in their cluster. Previous heuristics restricted themselves to 1-hop clusters. We show that the minimum d-hop dominating set problem is NP-complete. Then we present a heuristic to form d-clusters in a wireless ad hoc network. Nodes are assumed to have a non-deterministic mobility pattern. Clusters are formed by diffusing node identities along the wireless links. When the heuristic terminates, a node either becomes a clusterhead, or is at most d wireless hops away from its clusterhead. The value of d is a parameter of the heuristic. The heuristic can be run either at regular intervals, or whenever the network configuration changes. One of the features of the heuristic is that it tends to re-elect existing clusterheads even when the network configuration changes. This helps to reduce the communication overheads during transition from old clusterheads to new clusterheads. Also, there is a tendency to evenly distribute the mobile nodes among the clusterheads, and to evenly distribute the responsibility of acting as clusterheads among all nodes. Thus, the heuristic is fair and stable. Simulation experiments demonstrate that the proposed heuristic is better than the two earlier heuristics, namely the LCA (1) and Degree-based (11) solutions.
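The diffusion step of the heuristic is easy to sketch. The code below implements only a synchronous "floodmax" phase, in which every node adopts the largest node id heard over d rounds; the full heuristic adds a floodmin phase and tie-breaking rules for fair clusterhead election, which are omitted here, and the example graph is hypothetical.

```python
# Sketch: d rounds of floodmax diffusion over the wireless links.

def floodmax(adjacency, d):
    """adjacency: node id -> list of neighbor ids; returns winner per node."""
    winner = {v: v for v in adjacency}
    for _ in range(d):
        new = {}
        for v, nbrs in adjacency.items():
            new[v] = max([winner[v]] + [winner[u] for u in nbrs])
        winner = new
    return winner

# A 6-node chain; each node ends up with the largest id within d hops
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}
print(floodmax(adj, d=2))   # node -> its clusterhead candidate
```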
Wireless systems and interference avoidance Motivated by the emergence of programmable radios, we seek to understand a new class of communication system where pairs of transmitters and receivers can adapt their modulation/demodulation method in the presence of interference to achieve better performance. Using signal to interference ratio as a metric and a general signal space approach, we present a class of iterative distributed algorithms for synchronous systems which results in an ensemble of optimal waveforms for multiple users connected to a common receiver (or colocated independent receivers). That is, the waveform ensemble meets the Welch (1974) bound with equality and, therefore, achieves minimum average interference over the ensemble of signature waveforms. We derive fixed points for a number of scenarios, provide examples, look at ensemble stability under user addition and deletion as well as provide a simplistic comparison to synchronous code-division multiple-access. We close with suggestions for future work
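One member of the class of iterative algorithms described above is often called the eigen-algorithm: each user in turn replaces its signature with the minimum-eigenvalue eigenvector of the interference covariance seen at the common receiver. The sketch below is my paraphrase, with arbitrary dimensions and no noise term; it shows the ensemble's total squared correlation approaching the Welch bound value K^2/N.

```python
# Sketch: round-robin signature replacement toward a Welch-bound-equality
# ensemble (K unit-energy signatures in N dimensions, K > N).
import numpy as np

def eigen_algorithm(K=6, N=4, sweeps=50, seed=1):
    rng = np.random.default_rng(seed)
    S = rng.standard_normal((N, K))
    S /= np.linalg.norm(S, axis=0)                 # unit-energy signatures
    for _ in range(sweeps):
        for k in range(K):
            R = S @ S.T - np.outer(S[:, k], S[:, k])  # others' interference
            w, V = np.linalg.eigh(R)               # ascending eigenvalues
            S[:, k] = V[:, 0]                      # least-interfered direction
    return S

S = eigen_algorithm()
tsc = np.sum((S.T @ S) ** 2)                       # total squared correlation
print("TSC:", round(tsc, 4), "Welch bound K^2/N:", 6 ** 2 / 4)
```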
A 14-mW 6.25-Gb/s Transceiver in 90-nm CMOS This paper describes a 6.25-Gb/s 14-mW transceiver in 90-nm CMOS for chip-to-chip applications. The transceiver employs a number of features for reducing power consumption, including a shared LC-PLL clock multiplier, an inductor-loaded resonant clock distribution network, a low- and programmable-swing voltage-mode transmitter, software-controlled clock and data recovery (CDR) and adaptive equaliza...
LSB Dithering in MASH Delta–Sigma D/A Converters Theoretical sufficient conditions are presented that ensure that the quantization noise from every constituent digital delta-sigma (ΔΣ) modulator in a multistage digital ΔΣ modulator is asymptotically white and uncorrelated with the input. The conditions also determine if spectral shape can be imparted to the dither's contribution to the power spectral density of the multistage digital ΔΣ modulator's output. A large class of popular multistage digital ΔΣ modulators that satisfy the conditions are identified and tabulated for easy reference.
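For context, the sketch below is a conventional MASH 1-1-1 digital delta-sigma modulator with 1-LSB additive dither at the input, the kind of structure whose dither conditions the paper analyzes; the word length, input, and dither placement are illustrative choices of mine, and no claim is made that this exact configuration satisfies the paper's sufficient conditions.

```python
# Sketch: MASH 1-1-1 built from three first-order accumulators, with the
# standard noise-cancellation network y = c1 + (1-z^-1)c2 + (1-z^-1)^2 c3.
import random

def mash_111(x_seq, bits=16, dither=True, seed=0):
    random.seed(seed)
    M = 1 << bits
    a1 = a2 = a3 = 0
    c2_prev = d_prev = d_prev2 = 0
    out = []
    for x in x_seq:
        if dither:
            x = x + random.randint(0, 1)        # 1-LSB dither at the input
        a1 += x;  c1, a1 = a1 // M, a1 % M      # stage 1: accumulator + carry
        a2 += a1; c2, a2 = a2 // M, a2 % M      # stage 2 re-quantizes residue
        a3 += a2; c3, a3 = a3 // M, a3 % M      # stage 3
        y = c1 + (c2 - c2_prev) + (c3 - 2 * d_prev + d_prev2)
        c2_prev, d_prev2, d_prev = c2, d_prev, c3
        out.append(y)
    return out

seq = mash_111([12345] * 1024)
print(sum(seq) / len(seq) * (1 << 16))  # mean*M ~ input word (within M/len)
```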
NDC: Analyzing the impact of 3D-stacked memory+logic devices on MapReduce workloads While Processing-in-Memory has been investigated for decades, it has not been embraced commercially. A number of emerging technologies have renewed interest in this topic. In particular, the emergence of 3D stacking and the imminent release of Micron's Hybrid Memory Cube device have made it more practical to move computation near memory. However, the literature is missing a detailed analysis of a killer application that can leverage a Near Data Computing (NDC) architecture. This paper focuses on in-memory MapReduce workloads that are commercially important and are especially suitable for NDC because of their embarrassing parallelism and largely localized memory accesses. The NDC architecture incorporates several simple processing cores on a separate, non-memory die in a 3D-stacked memory package; these cores can perform Map operations with efficient memory access and without hitting the bandwidth wall. This paper describes and evaluates a number of key elements necessary in realizing efficient NDC operation: (i) low-EPI cores, (ii) long daisy chains of memory devices, (iii) the dynamic activation of cores and SerDes links. Compared to a baseline that is heavily optimized for MapReduce execution, the NDC design yields up to 15X reduction in execution time and 18X reduction in system energy.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54 ×(CPU), 1.56 ×(GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02 ×(memristor-based PIM) and 10.48 ×(CMOS-based PIM), on average.
1.021389
0.024757
0.01922
0.018333
0.018184
0.012547
0.008964
0.005552
0
0
0
0
0
0
Design Considerations for Interleaved ADCs. Interleaving can relax the power-speed tradeoffs of analog-to-digital converters and reduce their metastability error rate while increasing the input capacitance. This paper quantifies the benefits and derives an upper bound on the performance by considering kT/C noise and slewing requirements of the circuit driving the system. A frequency-domain analysis of interleaved converters is also presente...
A 1.6-GS/s 12.2-mW Seven-/Eight-Way Split Time-Interleaved SAR ADC Achieving 54.2-dB SNDR With Digital Background Timing Mismatch Calibration This article presents a split time-interleaved (TI) successive-approximation register (SAR) analog-to-digital converter (ADC) with digital background timing-skew mismatch calibration. It divides a TI-SAR ADC into two split parts with the same overall sampling rate but different numbers of TI channels. Benefitting from the proposed split TI topology, the timing-skew calibration convergence speed is fast without any extra analog circuits. The input impedance of the overall TI-ADC remains unchanged, which is essential for the preceding driving stage in a high-speed application. We designed a prototype seven-/eight-way split TI-ADC implemented in 28-nm CMOS. After a digital background timing-skew calibration, it reaches a 54.2-dB signal-to-noise-and-distortion ratio (SNDR) and 67.1-dB spurious free dynamic range (SFDR) with a near Nyquist rate input signal and a 2.5-GHz effective resolution bandwidth (ERBW). Furthermore, the power consumption of ADC core (mismatch calibration off-chip) is 12.2-mW running at 1.6 GS/s, leading to a Walden figure-of-merit (FOM) of 18.2 fJ/conv.-step and a Schreier FOM of 162.4 dB, respectively.
A Two-Way Interleaved 7-b 2.4-GS/s 1-Then-2 b/Cycle SAR ADC With Background Offset Calibration. This paper presents a 2× time-interleaved 7-b 2.4-GS/s 1-then-2 b/cycle SAR ADC in 28-nm CMOS. The process-voltage-temperature sensitivity of a multi-bit SAR architecture has been improved by the proposed 1-then-2 b/cycle scheme with background offset calibration. With the pre-charge reduction scheme, the traditional large switching energy and time consuming pre-charge operation have been removed,...
All-Digital Blind Background Calibration Technique for Any Channel Time-Interleaved ADC. This paper proposes a novel digital adaptive blind background calibration technique for the gain, timing skew, and offset mismatch errors in a time-interleaved analog-to-digital converter (TI-ADC). Based on the frequency-shifted basis functions generated only from the measured TI-ADC output, the three mismatch errors can be represented, extracted, and then subtracted from the TI-ADC output adaptiv...
Digitally Enhanced Wideband I/Q Downconversion Receiver With 2-Channel Time-Interleaved ADCs The interesting concept of employing in-phase/quadrature (I/Q) downconversion together with time-interleaved analog-to-digital converters allows digitizing very wide instantaneous radio-frequency (RF) bandwidths (BWs) and grants enhanced flexibility in accessing and processing the RF spectrum. Such a structure also inevitably suffers from performance degradation due to the analog components' nonidealities, e.g., frequency response mismatches (FRMs), that ultimately lead to spurious mismatch components that limit the system's dynamic range. Available solutions developed for I/Q mismatches or time-interleaving mismatches alone are not compatible for correcting the FRM spurs in such joint I/Q time-interleaved converter (IQ-TIC) architecture. This brief proposes novel blind FRM identification and correction solutions that are able to suppress all the associated spurs in the considered wideband IQ-TIC architecture. The proposed digital correction solutions are tested and verified using measured hardware data obtained from an experimental platform, exhibiting good FRM spur correction performance with instantaneous BW on the order of 800 MHz. These developments pave the way toward 5G radio communication devices and systems where instantaneous BWs on the order of 1 GHz are envisioned at centimeter- and millimeter-wave frequency bands.
Low complexity digital background calibration algorithm for the correction of timing mismatch in time-interleaved ADCs. A low-complexity post-processing algorithm to estimate and compensate for timing skew error in a four-channel time-interleaved analog to digital converter (TIADC) is presented in this paper, together with its hardware implementation. The Lagrange interpolator is used as the reconstruction filter, which alleviates online interpolator redesign by using a simplified representation of coefficients. Simulation results show that the proposed algorithm can suppress error tones for input signal frequencies from 0 to 0.4fs. The proposed structure has at least a 41% reduction in the number of required multipliers. Implementation of the algorithm for a four-channel 10-bit TIADC shows that, for a 0.4fs input signal frequency, the Signal to Noise and Distortion Ratio (SNDR) and Spurious-Free Dynamic Range (SFDR) are improved by 31.26 dB and 43.7 dB, respectively. Our proposed approximation technique does not degrade the performance of the system, resulting in the same SNDR and SFDR as the exact coefficient values. In addition, the proposed structure provides acceptable performance in the presence of wideband signals.
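The core of such timing-skew compensation can be sketched with a Lagrange fractional-delay FIR that digitally re-aligns a late-sampling channel; this is an illustrative 3rd-order design, not the paper's simplified-coefficient structure.

import numpy as np

def lagrange_fd_fir(delay, order=3):
    # Lagrange fractional-delay FIR: h[k] = prod_{m != k} (delay - m) / (k - m)
    h = np.ones(order + 1)
    for k in range(order + 1):
        for m in range(order + 1):
            if m != k:
                h[k] *= (delay - m) / (k - m)
    return h

# Toy channel that samples `skew` periods late; delay it by 1 + skew to re-align.
fin, skew = 0.1, 0.05                         # illustrative values, in sample units
t = np.arange(64)
x_skewed = np.sin(2 * np.pi * fin * (t + skew))
y = np.convolve(x_skewed, lagrange_fd_fir(1 + skew))   # ~ sin(2*pi*fin*(n - 1))
ref = np.sin(2 * np.pi * fin * (np.arange(len(y)) - 1))
print("max residual error:", np.abs(y[4:60] - ref[4:60]).max())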
A Polynomial-Based Time-Varying Filter Structure for the Compensation of Frequency-Response Mismatch Errors in Time-Interleaved ADCs This paper introduces a structure for the compensation of frequency-response mismatch errors in M-channel time-interleaved analog-to-digital converters (ADCs). It makes use of a number of fixed digital filters, approximating differentiators of different orders, and a few variable multipliers that correspond to parameters in polynomial models of the channel frequency responses. Whenever the channel frequency responses change, which occurs from time to time in a practical time-interleaved ADC, it suffices to alter the values of these variable multipliers. In this way, expensive on-line filter design is avoided. The paper includes several design examples that illustrate the properties and capabilities of the proposed structure.
Unreliable failure detectors for reliable distributed systems We introduce the concept of unreliable failure detectors and study how they can be used to solve Consensus in asynchronous systems with crash failures. We characterise unreliable failure detectors in terms of two properties—completeness and accuracy. We show that Consensus can be solved even with unreliable failure detectors that make an infinite number of mistakes, and determine which ones can be used to solve Consensus despite any number of crashes, and which ones require a majority of correct processes. We prove that Consensus and Atomic Broadcast are reducible to each other in asynchronous systems with crash failures; thus, the above results also apply to Atomic Broadcast. A companion paper shows that one of the failure detectors introduced here is the weakest failure detector for solving Consensus [Chandra et al. 1992].
Robust Stochastic Approximation Approach to Stochastic Programming In this paper we consider optimization problems where the objective function is given in a form of the expectation. A basic difficulty of solving such stochastic optimization problems is that the involved multidimensional integrals (expectations) cannot be computed with high accuracy. The aim of this paper is to compare two computational approaches based on Monte Carlo sampling techniques, namely, the stochastic approximation (SA) and the sample average approximation (SAA) methods. Both approaches, the SA and SAA methods, have a long history. Current opinion is that the SAA method can efficiently use a specific (say, linear) structure of the considered problem, while the SA approach is a crude subgradient method, which often performs poorly in practice. We intend to demonstrate that a properly modified SA approach can be competitive and even significantly outperform the SAA method for a certain class of convex stochastic problems. We extend the analysis to the case of convex-concave stochastic saddle point problems and present (in our opinion highly encouraging) results of numerical experiments.
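A toy contrast of the two methods on min_x E[(x - Z)^2] with Z ~ N(3, 1), whose optimum is x* = 3 (step sizes and sample counts are illustrative): robust SA takes noisy gradient steps with iterate averaging, while SAA fixes a sample and minimizes the empirical average exactly.

import numpy as np

rng = np.random.default_rng(0)
mu, T = 3.0, 5000

# Robust SA: O(1/sqrt(t)) steps on the stochastic gradient 2*(x - Z), then average.
x, x_avg = 0.0, 0.0
for t in range(1, T + 1):
    z = rng.normal(mu, 1.0)
    x -= (0.5 / np.sqrt(t)) * 2 * (x - z)
    x_avg += (x - x_avg) / t                  # running average of the iterates
print("SA  estimate:", round(x_avg, 3))

# SAA: draw T samples once, minimize the sample average (closed form: the mean).
print("SAA estimate:", round(rng.normal(mu, 1.0, T).mean(), 3))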
File Transfer Protocol
Dynamic sensor collaboration via sequential Monte Carlo We consider the application of sequential Monte Carlo (SMC) methods for Bayesian inference to the problem of information-driven dynamic sensor collaboration in clutter environments for sensor networks. The dynamics of the system under consideration are described by nonlinear sensing models within randomly deployed sensor nodes. The exact solution to this problem is prohibitively complex due to the nonlinear nature of the system. The SMC methods are, therefore, employed to track the probabilistic dynamics of the system and to make the corresponding Bayesian estimates and predictions. To meet the specific requirements inherent in sensor networks, such as low power consumption and collaborative information processing, we propose a novel SMC solution that makes use of the auxiliary particle filter technique for data fusion at densely deployed sensor nodes, and the collapsed kernel representation of the a posteriori distribution for information exchange between sensor nodes. Furthermore, an efficient numerical method is proposed for approximating the entropy-based information utility in sensor selection. It is seen that under the SMC framework, the optimal sensor selection and collaboration can be implemented naturally, and significant improvement is achieved over existing methods in terms of localization and tracking accuracies.
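A generic bootstrap (SIR) particle filter sketch for a scalar target with a nonlinear sensing model, showing the propagate/weight/resample loop that such SMC methods build on; the dynamics, sensor model, and noise levels here are invented for illustration.

import numpy as np

rng = np.random.default_rng(1)
Np, T = 500, 50
particles = rng.normal(0.0, 1.0, Np)
weights = np.full(Np, 1.0 / Np)
x_true = 0.0

for t in range(T):
    x_true = 0.9 * x_true + rng.normal(0, 0.5)            # hidden dynamics (assumed)
    z = x_true**2 / 20 + rng.normal(0, 0.1)               # nonlinear sensor (assumed)

    particles = 0.9 * particles + rng.normal(0, 0.5, Np)  # propagate
    weights *= np.exp(-0.5 * ((z - particles**2 / 20) / 0.1) ** 2)
    weights /= weights.sum()                              # Bayesian reweighting
    estimate = np.sum(weights * particles)

    if 1.0 / np.sum(weights**2) < Np / 2:                 # low effective sample size
        idx = rng.choice(Np, Np, p=weights)               # resample
        particles, weights = particles[idx], np.full(Np, 1.0 / Np)

print("final estimate vs truth:", round(estimate, 2), round(x_true, 2))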
A high efficiency and compact size 65nm power management module with 1.2V low-voltage PWM controller for UWB system application
Scheduling Analysis of TDMA-Constrained Tasks: Illustration with Software Radio Protocols In this paper a new task model is proposed for scheduling analysis of dependent tasks in radio stations that embed a TDMA communication protocol. TDMA is a channel access protocol that allows several stations to communicate in the same network by dividing time into several time slots. Tasks handling the TDMA radio protocol are scheduled so as to comply with the TDMA configuration: task parameters such as execution times, deadlines and release times are constrained by TDMA slots. The periodic task model, commonly used in scheduling analysis, is inefficient for the accurate specification of such systems, resulting in pessimistic scheduling analysis results. To encompass this issue, this paper proposes a new task model called Dependent General Multiframe (DGMF). This model extends the existing GMF model with precedence dependency and shared resource synchronization. We show how to perform scheduling analysis with DGMF by transforming it into a transaction model and using a schedulability test we proposed. In this paper we experiment on "software radio protocols" from Thales Communications & Security, which are representative of the systems we want to analyze. Experimental results show an improvement of system schedulability using the proposed analysis technique, compared to existing ones (GMF and periodic tasks). The new task model thus provides a technique to model and analyze TDMA systems with less pessimistic results.
A 0.5 V 10-bit 3 MS/s SAR ADC With Adaptive-Reset Switching Scheme and Near-Threshold Voltage-Optimized Design Technique This brief presents a 10-bit ultra-low power energy-efficient successive approximation register (SAR) analog-to-digital converter (ADC). A new adaptive-reset switching scheme is proposed to reduce the switching energy of the capacitive digital-to-analog converter (CDAC). The proposed adaptive-reset switching scheme reduces the average switching energy of the CDAC by 90% compared to the conventional scheme without the common-mode voltage variation. In addition, the near-threshold voltage (NTV)-optimized digital library is adopted to alleviate the performance degradation in the ultra-low supply voltage while simultaneously increasing the energy efficiency. The NTV-optimized design technique is also introduced to the bootstrapped switch design to improve the linearity of the sample-and-hold circuit. The test chip is fabricated in a 65 nm CMOS, and its core area is 0.022 mm². At a supply of 0.5 V and a sampling speed of 3 MS/s, the SAR ADC achieves an ENOB of 8.78 bit and consumes 3.09 µW. The resultant Walden figure-of-merit (FoM) is 2.34 fJ/conv.-step.
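As a quick consistency check of the quoted figure of merit, the Walden FoM is P / (fs · 2^ENOB); plugging in the numbers reported above:

# Walden FoM = P / (fs * 2**ENOB), using the values quoted in the brief.
P, fs, enob = 3.09e-6, 3e6, 8.78
print(f"{P / (fs * 2 ** enob) * 1e15:.2f} fJ/conv.-step")   # ~2.34, as reported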
1.05
0.05
0.05
0.05
0.05
0.05
0.025
0
0
0
0
0
0
0
A Monolithic Voltage-Mode DC-DC Converter With a Novel Oscillator and Ramp Generator A voltage-mode DC-DC converter with a novel oscillator and ramp generator is presented. The proposed oscillator and ramp generator has a simple structure and does not need an external reference signal generator to set the upper and lower levels of the ramp signal. The DC-DC converter was fabricated in a standard 0.5 µm CMOS process. The converter can operate from 465 kHz to 556 kHz with a supply voltage from 3 to 5.5 V, which is appropriate for portable electronic devices powered by a single-cell lithium-ion battery. The output ripple voltage is about 10 mV with a 10 µF off-chip capacitor and a 10 µH off-chip inductor. The power conversion efficiency is over 80% for load currents from 30 to 400 mA.
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use dominance frontiers, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
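A compact way to compute dominance frontiers from an immediate-dominator map is the later Cooper-Harvey-Kennedy walk sketched below; this illustrates the concept, not necessarily the paper's own algorithm.

def dominance_frontiers(preds, idom):
    # preds: node -> list of CFG predecessors; idom: node -> immediate dominator.
    df = {n: set() for n in preds}
    for b, ps in preds.items():
        if len(ps) >= 2:                      # only join nodes create frontier entries
            for p in ps:
                runner = p
                while runner != idom[b]:      # walk up the dominator tree
                    df[runner].add(b)
                    runner = idom[runner]
    return df

# Diamond CFG entry -> {a, b} -> join: the frontier of a and of b is {join}.
preds = {"entry": [], "a": ["entry"], "b": ["entry"], "join": ["a", "b"]}
idom = {"entry": "entry", "a": "entry", "b": "entry", "join": "entry"}
print(dominance_frontiers(preds, idom))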
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ⋄W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ⋄W. Thus, ⋄W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
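Chord's one operation can be sketched in a few lines: hash keys and nodes onto a 2^m identifier ring and store each key at the first node clockwise from its identifier (finger tables, joins, and failures are omitted; the names here are illustrative).

import hashlib

def chord_id(name, m=16):
    # SHA-1 hash truncated onto a 2**m identifier ring, as in consistent hashing.
    return int(hashlib.sha1(name.encode()).hexdigest(), 16) % (1 << m)

def successor(node_ids, key_id):
    # The key lives on the first node at or after key_id, wrapping around the ring.
    ring = sorted(node_ids)
    for n in ring:
        if n >= key_id:
            return n
    return ring[0]

nodes = [chord_id(f"node-{i}") for i in range(8)]
print("key 'cat' -> node", successor(nodes, chord_id("cat")))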
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area² product (EDA²P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA²P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
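One of the listed applications makes a compact illustration: the textbook ADMM splitting for the lasso, with a cached Cholesky factor for the x-update, a soft-threshold z-update, and a dual ascent on u (the problem sizes and lambda below are illustrative).

import numpy as np

def lasso_admm(A, b, lam, rho=1.0, iters=200):
    # min 0.5*||Ax - b||^2 + lam*||x||_1 via ADMM.
    n = A.shape[1]
    x = z = u = np.zeros(n)
    Atb = A.T @ b
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))   # factor once, reuse
    for _ in range(iters):
        x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0)  # soft threshold
        u = u + x - z                                   # dual update
    return z

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 20))
x0 = np.zeros(20); x0[:3] = [2.0, -1.5, 1.0]            # sparse ground truth
b = A @ x0 + 0.01 * rng.normal(size=50)
print(np.round(lasso_admm(A, b, lam=1.0)[:5], 2))       # recovers the sparse support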
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)-size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε²}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D+√n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(log N), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator, while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via error amplifier) with a moderate-gain high-frequency path (via bandpass filter) at the inputs of the PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate the two zeros required in a Type III compensator. The constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process, with the constant Gm/C biasing technique being applied to one of the designs. Measurement results show that the converter output settles within 7 μs for a load-current step of 500 mA. Peak efficiency of 97% is obtained at 360 mW output power, and high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of the proposed compensator are reduced by >75% in both designs, compared to an equivalent conventional Type III compensator.
Conductance modulation techniques in switched-capacitor DC-DC converter for maximum-efficiency tracking and ripple mitigation in 22nm Tri-gate CMOS Active conduction modulation techniques are demonstrated in a fully integrated multi-ratio switched-capacitor voltage regulator with hysteretic control, implemented in 22nm tri-gate CMOS with a high-density MIM capacitor. We present (i) an adaptive switching frequency and switch-size scaling scheme for maximum-efficiency tracking across a wide range of voltages and currents, governed by a frequency-based control law that is experimentally validated across multiple dies and temperatures, and (ii) a simple active ripple mitigation technique to modulate the gate drive of select MOSFET switches effectively in all conversion modes. Efficiency boosts of up to 15% are measured under light-load conditions. Load-independent output ripple of <50 mV is achieved, enabling fewer interleaving phases. Testchip implementations and measurements demonstrate ease of integration in SoC designs, power-efficiency benefits, and EMI/RFI improvements.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Brief Announcement: An Early-Stopping Protocol for Computing Aggregate Functions in Sensor Networks Nodes in a Sensor Network can collaborate to process the sensed data but, due to unreliability, a monitoring strategy cannot rely on individual sensor values. Instead, the network should use aggregated information from groups of sensor nodes [2,3,7]. The topic of this work is the efficient computation of aggregate functions in the highly constrained Sensor Network setting, where node restrictions are modeled as in [4], the random node deployment is modeled as a geometric graph, and the resulting topology, the node-identifier assignment, and the assignment of input values to be aggregated are adversarial.
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use dominance frontiers, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ⋄W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ⋄W. Thus, ⋄W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area² product (EDA²P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA²P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)-size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε²}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D+√n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(log N), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator, while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via error amplifier) with a moderate-gain high-frequency path (via bandpass filter) at the inputs of the PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate the two zeros required in a Type III compensator. The constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process, with the constant Gm/C biasing technique being applied to one of the designs. Measurement results show that the converter output settles within 7 μs for a load-current step of 500 mA. Peak efficiency of 97% is obtained at 360 mW output power, and high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of the proposed compensator are reduced by >75% in both designs, compared to an equivalent conventional Type III compensator.
Conductance modulation techniques in switched-capacitor DC-DC converter for maximum-efficiency tracking and ripple mitigation in 22nm Tri-gate CMOS Active conduction modulation techniques are demonstrated in a fully integrated multi-ratio switched-capacitor voltage regulator with hysteretic control, implemented in 22nm tri-gate CMOS with a high-density MIM capacitor. We present (i) an adaptive switching frequency and switch-size scaling scheme for maximum-efficiency tracking across a wide range of voltages and currents, governed by a frequency-based control law that is experimentally validated across multiple dies and temperatures, and (ii) a simple active ripple mitigation technique to modulate the gate drive of select MOSFET switches effectively in all conversion modes. Efficiency boosts of up to 15% are measured under light-load conditions. Load-independent output ripple of <50 mV is achieved, enabling fewer interleaving phases. Testchip implementations and measurements demonstrate ease of integration in SoC designs, power-efficiency benefits, and EMI/RFI improvements.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Chasing Carbon: The Elusive Environmental Footprint of Computing Given recent algorithm, software, and hardware innovation, computing has enabled a plethora of new applications. As computing becomes increasingly ubiquitous, however, so does its environmental impact. This article brings the issue to the attention of computer-systems researchers. Our analysis, built on industry-reported characterization, quantifies the environmental effects of computing in terms of carbon emissions. Broadly, carbon emissions have two sources: operational energy consumption, and hardware manufacturing and infrastructure. Although carbon emissions from the former are decreasing, thanks to algorithmic, software, and hardware innovations that boost performance and power efficiency, the overall carbon footprint of computer systems continues to grow. This work quantifies the carbon output of computer systems to show that most emissions related to modern mobile and data-center equipment come from hardware manufacturing and infrastructure. We, therefore, outline future directions for minimizing the environmental impact of computing systems.
Compiler algorithms for synchronization Translating program loops into a parallel form is one of the most important transformations performed by concurrentizing compilers. This transformation often requires the insertion of synchronization instructions within the body of the concurrent loop. Several loop synchronization techniques are presented first. Compiler algorithms to generate synchronization instructions for singly-nested loops are then discussed. Finally, a technique for the elimination of redundant synchronization instructions is presented.
A Software Scheme for Multithreading on CGRAs Recent industry trends show a drastic rise in the use of hand-held embedded devices, from everyday applications to medical (e.g., monitoring devices) and critical defense applications (e.g., sensor nodes). The two key requirements in the design of such devices are their processing capabilities and battery life. There is therefore an urgency to build high-performance and power-efficient embedded devices, inspiring researchers to develop novel system designs for the same. The use of a coprocessor (application-specific hardware) to offload power-hungry computations is gaining favor among system designers to suit their power budgets. We propose the use of CGRAs (Coarse-Grained Reconfigurable Arrays) as a power-efficient coprocessor. Though CGRAs have been widely used for streaming applications, the extensive compiler support required limits its applicability and use as a general purpose coprocessor. In addition, a CGRA structure can efficiently execute only one statically scheduled kernel at a time, which is a serious limitation when used as an accelerator to a multithreaded or multitasking processor. In this work, we envision a multithreaded CGRA where multiple schedules (or kernels) can be executed simultaneously on the CGRA (as a coprocessor). We propose a comprehensive software scheme that transforms the traditionally single-threaded CGRA into a multithreaded coprocessor to be used as a power-efficient accelerator for multithreaded embedded processors. Our software scheme includes (1) a compiler framework that integrates with existing CGRA mapping techniques to prepare kernels for execution on the multithreaded CGRA and (2) a runtime mechanism that dynamically schedules multiple kernels (offloaded from the processor) to execute simultaneously on the CGRA coprocessor. Our multithreaded CGRA coprocessor implementation thus makes it possible to achieve improved power-efficient computing in modern multithreaded embedded systems.
Domain Specialization Is Generally Unnecessary for Accelerators. Domain-specific accelerators (DSAs), which sacrifice programmability for efficiency, are a reaction to the waning benefits of device scaling. This article demonstrates that there are commonalities between DSAs that can be exploited with programmable mechanisms. The goals are to create a programmable architecture that can match the benefits of a DSA and to create a platform for future accelerator i...
PathSeeker: A Fast Mapping Algorithm for CGRAs Coarse-grained reconfigurable arrays (CGRAs) have gained traction over the years as a low-power accelerator due to the efficient mapping of the compute-intensive loops onto the 2-D array by the CGRA compiler. When encountering a mapping failure for a given node, existing mapping techniques either exit and retry the mapping anew, or perform backtracking, i.e., recursively remove the previously mapped node to find a valid mapping. Abandoning mapping and starting afresh can deteriorate the quality of mapping and the compilation time. Even backtracking may not be the best choice since the previous node may not be the incorrectly placed node. To tackle this issue, we propose PathSeeker - a mapping approach that analyzes mapping failures and performs local adjustments to the schedule to obtain a mapping. Experimental results on 35 top performance-critical loops from MiBench, Rodinia, and Parboil benchmark suites demonstrate that PathSeeker can map all of them with better mapping quality and dramatically less compilation time than the previous state-of-the-art approaches - GraphMinor and RAMP, which were unable to map 20 and 5 loops, respectively. Over these benchmarks, PathSeeker achieves 28% better performance at 550x compilation speedup over GraphMinor and 3% better performance at 10x compilation speedup over RAMP on a 4x4 CGRA.
OpenCGRA: An Open-Source Unified Framework for Modeling, Testing, and Evaluating CGRAs Coarse-grained reconfigurable arrays (CGRAs), loosely defined as arrays of functional units (e.g., adder, subtractor, multiplier, divider, or larger multi-operation units, but smaller than a general-purpose core) interconnected through a Network-on-Chip, provide higher flexibility than domain-specific ASIC accelerators while offering increased hardware efficiency with respect to fine-grained reconfigurable devices, such as Field Programmable Gate Arrays (FPGAs). The fast evolving fields of machine learning and edge computing, which are seeing a continuous flow of novel algorithms and larger models, make CGRAs ideal architectures to allow domain specialization without losing too much generality. Designing and generating a CGRA, however, still requires to define the type and number of the specific functional units, implement their interconnect and the network topology, and perform the simulation and validation, given a variety of workloads of interest. In this paper, we propose OpenCGRA, the first open-source integrated framework that is able to support the full top-to-bottom design flow for specializing and implementing CGRAs: modeling at different abstraction levels (functional level, cycle level, register-transfer level) with compiler support, verification at different granularities (unit testing, integration testing, property-based testing), simulation, generation of synthesizable Verilog, and characterization (area, power, and timing). By using OpenCGRA, it only takes a few hours to build a specialized power- and area-efficient CGRA throughout the entire design flow given a set of applications of interest. OpenCGRA is available online at https://github.com/pnnl/OpenCGRA.
A Fully Pipelined and Dynamically Composable Architecture of CGRA. Future processor chips will not be limited by the transistor resources, but will be mainly constrained by energy efficiency. Reconfigurable fabrics bring higher energy efficiency than CPUs via customized hardware that adapts to user applications. Among different reconfigurable fabrics, coarse-grained reconfigurable arrays (CGRAs) can be even more efficient than fine-grained FPGAs when bit-level customization is not necessary in target applications. CGRAs were originally developed in the era when transistor resources were more critical than energy efficiency. Previous work shares hardware among different operations via modulo scheduling and time multiplexing of processing elements. In this work, we focus on an emerging scenario where transistor resources are rich. We develop a novel CGRA architecture that enables full pipelining and dynamic composition to improve energy efficiency by taking full advantage of abundant transistors. Several new design challenges are solved. We implement a prototype of the proposed architecture in a commodity FPGA chip for verification. Experiments show that our architecture can fully exploit the energy benefits of customization for user applications in the scenario of rich transistor resources.
Randomized algorithms This text by two well-known experts in the field presents the basic concepts in the design and analysis of randomized algorithms at a level accessible to beginning graduate students, professionals and researchers.
The gem5 simulator The gem5 simulation infrastructure is the merger of the best aspects of the M5 [4] and GEMS [9] simulators. M5 provides a highly configurable simulation framework, multiple ISAs, and diverse CPU models. GEMS complements these features with a detailed and flexible memory system, including support for multiple cache coherence protocols and interconnect models. Currently, gem5 supports most commercial ISAs (ARM, ALPHA, MIPS, Power, SPARC, and x86), including booting Linux on three of them (ARM, ALPHA, and x86). The project is the result of the combined efforts of many academic and industrial institutions, including AMD, ARM, HP, MIPS, Princeton, MIT, and the Universities of Michigan, Texas, and Wisconsin. Over the past ten years, M5 and GEMS have been used in hundreds of publications and have been downloaded tens of thousands of times. The high level of collaboration on the gem5 project, combined with the previous success of the component parts and a liberal BSD-like license, make gem5 a valuable full-system simulation tool.
PRESENT: An Ultra-Lightweight Block Cipher With the establishment of the AES the need for new block ciphers has been greatly diminished; for almost all block cipher applications the AES is an excellent and preferred choice. However, despite recent implementation advances, the AES is not suitable for extremely constrained environments such as RFID tags and sensor networks. In this paper we describe an ultra-lightweight block cipher, PRESENT. Both security and hardware efficiency have been equally important during the design of the cipher and at 1570 GE, the hardware requirements for PRESENT are competitive with today's leading compact stream ciphers.
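For a flavor of why PRESENT is so small in hardware, one round is just a key XOR, sixteen 4-bit S-boxes, and a fixed bit permutation. The sketch below uses the published S-box and pLayer but omits the 31-round structure and the key schedule, so it is illustrative rather than a full implementation.

SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def present_round(state, round_key):
    # One PRESENT round on a 64-bit int: addRoundKey, sBoxLayer, pLayer.
    state ^= round_key
    out = 0
    for i in range(16):                        # 16 parallel 4-bit S-boxes
        out |= SBOX[(state >> (4 * i)) & 0xF] << (4 * i)
    perm = 0
    for i in range(64):                        # bit i -> 16*i mod 63 (bit 63 fixed)
        j = 63 if i == 63 else (16 * i) % 63
        perm |= ((out >> i) & 1) << j
    return perm

print(hex(present_round(0x0, 0x0)))            # all-zero state after one round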
The Quadrature LC Oscillator: A Complete Portrait Based on Injection Locking We show that the quadrature LC oscillator is best treated as two strongly coupled, nominally identical oscillators that are locked to the same frequency. Differential equations that extend Adler's description of locking to strong injection reveal the full dynamics of this circuit. With a simplifying insight, the analysis reveals all the modes of the oscillator, their stability, the effects of mism...
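For reference, the weak-injection behavior that those differential equations extend is usually quoted in Adler's form (stated here from the standard literature, not taken from this abstract):

\[
\frac{d\theta}{dt} \;=\; \Delta\omega_0 \;-\; \frac{\omega_0}{2Q}\,\frac{I_{\mathrm{inj}}}{I_{\mathrm{osc}}}\,\sin\theta,
\]

where theta is the phase difference between the injected signal and the oscillator, and lock is possible only while |Delta omega_0| <= (omega_0/2Q)(I_inj/I_osc).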
Architectural Evolution of Integrated M-Phase High-Q Bandpass Filters M-phase bandpass filters (BPFs) are analyzed, and variations of the structure are proposed. For values of M that are integer multiples of 4, the conventional M-phase BPF structure is modified to take complex baseband impedances and frequency-translate their complex impedance response to the local oscillator frequency. Also, it is demonstrated how the M-phase BPF can be modified to implement a high quality factor (Q) image-rejection BPF with quadrature RF inputs. In addition, we present high-Q BPFs whose center frequencies are equal to the sum or difference of the RF and IF (intermediate frequency) clocks. Such filters can be useful in heterodyne receiver architectures.
Quadrature Bandpass Sampling Rules for Single- and Multiband Communications and Satellite Navigation Receivers In this paper, we examine how existing rules for bandpass sampling rates can be applied to quadrature bandpass sampling. We find that there are significantly more allowable sampling rates and that the minimum rate can be reduced.
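The classical real bandpass-sampling rule behind this comparison is easy to enumerate: for a band [fL, fH] of width B, any fs with 2·fH/n <= fs <= 2·fL/(n-1) for some integer n <= fH/B is alias-free. The sketch below lists the lowest valid windows for an illustrative 2-MHz band; the paper's point is that quadrature (complex) sampling relaxes these rules further.

def bandpass_rates(f_lo, f_hi):
    # Valid uniform real-sampling windows for the band [f_lo, f_hi].
    bw = f_hi - f_lo
    windows = []
    for n in range(1, int(f_hi // bw) + 1):
        lo = 2 * f_hi / n
        hi = 2 * f_lo / (n - 1) if n > 1 else float("inf")
        if lo <= hi:
            windows.append((lo, hi))
    return windows

# Illustrative 2-MHz-wide band near 1575 MHz (GPS-L1-like numbers).
for lo, hi in bandpass_rates(1574.42e6, 1576.42e6)[-3:]:
    print(f"fs in [{lo / 1e6:.3f}, {hi / 1e6:.3f}] MHz")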
A Sub-µW Reconfigurable Front-End for Invasive Neural Recording That Exploits the Spectral Characteristics of the Wideband Neural Signal This paper presents a sub-µW ac-coupled reconfigurable front-end for invasive wideband neural signal recording. The proposed topology embeds filtering capabilities enabling the selection of different frequency bands inside the neural signal spectrum. Power consumption is optimized by defining specific noise targets for each sub-band. These targets take into account the spectral characteristics of wideband neural signals: local field potentials (LFP) exhibit 1/f^x magnitude scaling while action potentials (AP) show uniform magnitude across frequency. Additionally, noise targets also consider electrode noise and the spectral distribution of noise sources in the circuit. An experimentally verified prototype designed in a standard 180 nm CMOS process draws 815 nW from a 1 V supply. The front-end is able to select among four different frequency bands (modes) up to 5 kHz. The measured input-referred spot noise at 500 Hz in the LFP mode (1 Hz - 700 Hz) is 55 nV/√Hz while the integrated noise in the AP mode (200 Hz - 5 kHz) is 4.1 µVrms. The proposed front-end achieves sub-µW operation without penalizing other specifications such as input swing, common-mode or power-supply rejection ratios. It reduces the power consumption of neural front-ends with spectral selectivity by 6.1× and, compared with conventional wideband front-ends, it obtains a reduction of 2.5×.
1.2
0.2
0.2
0.2
0.2
0.1
0.05
0
0
0
0
0
0
0
Stability Analysis of Positive Polynomial Fuzzy-Model-Based Control Systems With Time Delay Under Imperfect Premise Matching. This paper deals with the stability and positivity analysis of polynomial-fuzzy-model-based (PFMB) control systems with time delay, which is formed by a polynomial fuzzy model and a polynomial fuzzy controller connected in a closed loop, under imperfect premise matching. To improve the design and realization flexibility, the polynomial fuzzy model and the polynomial fuzzy controller are allowed to...
Stability Analysis of Positive Interval Type-2 TSK Systems With Application to Energy Markets Positive systems play an important role in many fields including biology, chemistry, and economics, among others. This paper discusses the stability of interval type-2 discrete-time positive Takagi-Sugeno-Kang (TSK) fuzzy systems. It discusses positive TSK systems and their nonzero equilibrium point. It then provides sufficient conditions for their exponential stability and instability. All the proposed stability and instability conditions can be tested using linear matrix inequalities. The stability and instability tests are demonstrated through application to a TSK model of the electric power market under a variety of market conditions.
Type-2 Fuzzy Sets and Systems: An Overview [corrected reprint] As originally published in the February 2007 issue of IEEE Computational Intelligence Magazine, the above titled paper (ibid., vol. 2, no. 1, pp. 20-29, Feb 07) contained errors in mathematics that were introduced by the publisher. The corrected version is reprinted in its entirety.
A simple graph theoretic characterization of reachability for positive linear systems In this paper we consider discrete-time linear positive systems, that is systems defined by a pair (A,B) of non-negative matrices. We study the reachability of such systems which in this case amounts to the freedom of steering the state in the positive orthant by using non-negative control sequences. This problem was solved recently [Canonical forms for positive discrete-time linear control systems, Linear Algebra Appl., 310 (2000) 49]. However we derive here necessary and sufficient conditions for reachability in a simpler and more compact form. These conditions are expressed in terms of particular paths in the graph which is naturally associated with the system.
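As a companion to the abstract above, a sketch of the matrix-level test that such graph conditions reformulate: under the commonly cited characterization, a positive pair (A, B) is reachable iff the reachability matrix [B, AB, ..., A^(n-1)B] contains an n x n monomial submatrix (for every coordinate i, some column has its only nonzero entry in row i). The helper name and the example system are ours.

```python
import numpy as np

def positively_reachable(A, B, tol=1e-12):
    # Check for monomial columns in the reachability matrix; each such
    # column lets one coordinate be steered independently with inputs >= 0.
    A, B = np.asarray(A, float), np.asarray(B, float)
    n = A.shape[0]
    R = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])
    covered = set()
    for col in R.T:
        nz = np.flatnonzero(np.abs(col) > tol)
        if len(nz) == 1:              # monomial column: single nonzero row
            covered.add(nz[0])
    return covered == set(range(n))

# Example: the input feeds state 0 and A shifts mass to state 1.
A = np.array([[0.0, 0.0], [1.0, 0.0]])
B = np.array([[1.0], [0.0]])
print(positively_reachable(A, B))     # True: both unit directions appear
```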
Observer-based Fuzzy Adaptive Inverse Optimal Output Feedback Control for Uncertain Nonlinear Systems In this article, an observer-based fuzzy adaptive inverse optimal output feedback control problem is studied for a class of nonlinear systems in strict-feedback form. The considered nonlinear systems contain unknown nonlinear dynamics and their states are not measured directly. Fuzzy logic systems are applied to identify the unknown nonlinear dynamics and an auxiliary nonlinear system is construct...
Fuzzy Secure Control for Nonlinear <inline-formula><tex-math notation="LaTeX">$N$</tex-math></inline-formula>-D Parabolic PDE-ODE Coupled Systems Under Stochastic Deception Attacks This article focuses on the design of fuzzy secure control for a class of coupled systems, which are modeled by a nonlinear <inline-formula xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink"><tex-math notation="LaTeX">$N$</tex-math></inline-formula> -dimensional ( <inline-formula xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink"><tex-math notation="LaTeX">$N$</tex-math></inline-formula> -D) parabolic partial differential equation (PDE) subsystem and an ordinary differential equation (ODE) subsystem. Under stochastic deception attacks, a fuzzy secure control scheme is designed, which is effective to tolerate the attacks and ensure the desired performance for the considered systems. A new fuzzy-dependent Poincare–Wirtinger’s inequality (PWI) is proposed. Compared with the traditional Poincare’s inequality, the fuzzy-dependent PWI is more flexible and less conservative. Meanwhile, an augmented Lyapunov–Krasovskii functional (LKF) is newly constructed, which strengthens the correlations of the PDE subsystem and ODE subsystem. Then, on the ground of the fuzzy-dependent PWI and the augmented LKF, new exponential stabilization criteria are set up for the PDE-ODE coupled systems. Finally, a hypersonic rocket car is presented to verify the effectiveness and less conservatism of the obtained results.
Stability analysis and constrained control of a class of fuzzy positive systems with delays using linear copositive Lyapunov functional This paper deals with the stability of nonlinear continuous-time positive systems with delays represented by the Takagi-Sugeno (T-S) fuzzy model. A simpler sufficient condition of stability based on linear copositive Lyapunov functional (LCLF) is derived which is not relevant to the magnitude of delays. Based on the result of stability, the problem of controller design via the so-called parallel distributed compensation (PDC) scheme is solved. The control is under a positivity constraint, which means that the resulting closed-loop systems are not only stable, but also positive. Constrained positive control is also considered, further requiring that the trajectory of the closed-loop system is bounded by a prescribed boundary if the initial condition is bounded by the same boundary. The stability results are formulated as linear programs (LPs) and linear matrix inequalities (LMIs), and the control laws can be obtained by solving a set of bilinear matrix inequalities (BMIs). A numerical example and a real plant are studied to demonstrate the efficiency of the proposed method. © Springer Science+Business Media, LLC 2012.
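To make the LCLF idea above concrete, here is a sketch of the base linear program for the simplest case, a single continuous-time positive (Metzler) mode: stability is equivalent to the existence of v > 0 with A^T v < 0, which makes V(x) = v^T x a linear copositive Lyapunov function. The fuzzy, delayed, and constrained extensions treated in the paper are not reproduced; the example matrix is ours.

```python
import numpy as np
from scipy.optimize import linprog

def copositive_lyapunov(A):
    # Feasibility LP: find v with v >= 1 (positivity up to scaling)
    # and A^T v <= -1 (strict decrease up to scaling).
    n = A.shape[0]
    res = linprog(c=np.zeros(n),
                  A_ub=A.T, b_ub=-np.ones(n),
                  bounds=[(1.0, None)] * n)
    return res.x if res.success else None

A = np.array([[-2.0, 1.0],
              [0.5, -3.0]])          # Metzler and Hurwitz
v = copositive_lyapunov(A)
print("certificate v =", v)          # any returned v certifies stability
```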
Moving Bottlenecks in Car Traffic Flow: A PDE-ODE Coupled Model We study a model of vehicular traffic flow, represented by a coupled system formed by a scalar conservation law, describing the evolution of cars density, and an ODE, whose solution is the position of a moving bottleneck, i.e., a slower vehicle moving inside the cars' flow. A fractional step approach is used to approximate the coupled model, and convergence is proved by compactness arguments. Finally, the limit of such an approximating sequence is proved to solve the original PDE-ODE model.
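A rough fractional-step sketch in the spirit of the approximation described above: alternate a Godunov step for the LWR equation rho_t + (rho(1-rho))_x = 0 with an Euler step for the bottleneck ODE y' = min(v_b, 1 - rho(y)). The flux constraint that the bottleneck imposes back on the PDE, which the paper's model includes, is omitted here, and all parameters are illustrative.

```python
import numpy as np

# Godunov demand/supply for the concave flux f(rho) = rho(1 - rho),
# whose critical density is 0.5 and capacity is 0.25.
def demand(r):  return np.where(r < 0.5, r * (1 - r), 0.25)
def supply(r):  return np.where(r < 0.5, 0.25, r * (1 - r))

L, nx, T = 10.0, 200, 4.0
dx = L / nx
dt = 0.4 * dx                          # CFL-safe: max wave speed is 1
rho = np.where(np.linspace(0, L, nx) < 5.0, 0.6, 0.1)   # initial density
y, v_b = 2.0, 0.3                      # bottleneck position and max speed

t = 0.0
while t < T:
    flux = np.minimum(demand(rho[:-1]), supply(rho[1:]))   # interface fluxes
    rho[1:-1] -= dt / dx * (flux[1:] - flux[:-1])          # boundaries fixed
    i = min(int(y / dx), nx - 1)
    y += dt * min(v_b, 1.0 - rho[i])   # ODE step: ride the local car speed
    t += dt
print(f"bottleneck at y = {y:.2f} after t = {T}")
```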
P-Grid: a self-organizing structured P2P system Abstract: this paper was supported in part by the National Competence Center in Research on Mobile Information and Communication Systems (NCCR-MICS), a center supported by the Swiss National Science Foundation under grant number 5005-67322 and by SNSF grant 2100064994, "Peer-to-Peer Information Systems." messages. From the responses it (randomly) selects certain peers to which direct network links are established
Design of Symmetrical Class E Power Amplifiers for Very Low Harmonic-Content Applications Class E power amplifier circuits are very suitable for high efficiency power amplification applications in the radio-frequency and microwave ranges. However, due to the inherent asymmetrical driving arrangement, they suffer significant harmonic contents in the output voltage and current, and usually require substantial design efforts in achieving the desired load matching networks for applications requiring very low harmonic contents. In this paper, the design of a Class E power amplifier with resonant tank being symmetrically driven by two Class E circuits is studied. The symmetrical Class E circuit, under nominal operating conditions, has extremely low harmonic distortions, and the design of the impedance matching network for harmonic filtering becomes less critical. Practical steady-state design equations for Class E operation are derived and graphically presented. Experimental circuits are constructed for distortion evaluation. It has been found that this circuit offers total harmonic distortions which are about an order of magnitude lower than those of the conventional Class E power amplifier.
Efficient Broadcast in Structured P2P Networks In this position paper, we present an efficient algorithm for performing a broadcast operation with minimal cost in structured DHT-based P2P networks. In a system of N nodes, a broadcast message originating at an arbitrary node reaches all other nodes after exactly N - 1 messages. We emphasize the perception of a class of DHT systems as a form of distributed k-ary search and we take advantage of that perception in constructing a spanning tree that is utilized for efficient broadcasting. We consider broadcasting as a basic service that adds to existing DHTs the ability to search using arbitrary queries as well as disseminate/collect global information.
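A toy version of the interval-delegation idea behind such spanning-tree broadcasts: each message carries the sub-interval its receiver becomes responsible for, every node is contacted exactly once, and the total is exactly N - 1 messages. For brevity this sketch delegates to arbitrary nodes from a global view rather than to the O(log N) routing-table fingers a real DHT would use.

```python
def broadcast(nodes, origin):
    # Returns the list of (sender, receiver) messages of the spanning tree.
    nodes = sorted(nodes)
    messages = []

    def cover(sender, interval):          # sender must reach all of interval
        if not interval:
            return
        mid = len(interval) // 2
        head = interval[mid]              # delegate for the upper half
        messages.append((sender, head))
        cover(head, interval[mid + 1:])   # head covers what follows it
        cover(sender, interval[:mid])     # sender keeps the lower half

    cover(origin, [n for n in nodes if n != origin])
    return messages

msgs = broadcast(list(range(16)), origin=0)
print(len(msgs), "messages for 16 nodes")   # exactly N - 1 = 15
```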
Efficient dithering in MASH sigma-delta modulators for fractional frequency synthesizers The digital multistage-noise-shaping (MASH) ΣΔ modulators used in fractional frequency synthesizers are prone to spur tone generation in their output spectrum. In this paper, the state of the art on spur-tone-magnitude reduction is used to demonstrate that an M-bit MASH architecture dithered by a simple M-bit linear feedback shift register (LFSR) can be as effective as more sophisticated topologies if the dither signal is properly added. A comparison between the existent digital ΣΔ modulators used in fractional synthesizers is presented to demonstrate that the MASH architecture has the best tradeoff between complexity and quantization noise shaping, but they present spur tones. The objective of this paper was to significantly decrease the area of the circuit used to reduce the spur tone magnitude for these MASH topologies. The analysis is validated with a theoretical study of the paths where the dither signal can be added. Experimental results of a digital M-bit MASH 1-1-1 ΣΔ modulator with the proposed way to add the LFSR dither are presented to make a hardware comparison.
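A behavioral sketch of the architecture under discussion: an M-bit MASH 1-1-1 built from three cascaded first-order accumulators, with a 16-bit LFSR dither bit added at the input LSB. Which injection path is "proper" is precisely the paper's question; this sketch fixes one plausible path, and all parameters are illustrative.

```python
import numpy as np

M = 16
MOD = 1 << M
frac = int(0.37 * MOD)             # static fractional word -> spur-prone

def lfsr_bits(n, state=0xACE1):
    # 16-bit Fibonacci LFSR, taps 16,14,13,11 (maximal length).
    out = []
    for _ in range(n):
        bit = ((state >> 0) ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1
        state = (state >> 1) | (bit << 15)
        out.append(state & 1)
    return np.array(out)

N = 1 << 14
dither = lfsr_bits(N)              # 1-bit dither injected at the input LSB
a1 = a2 = a3 = 0
c2d = c3d1 = c3d2 = 0
y = np.empty(N)
for i in range(N):
    a1 += frac + dither[i];  c1, a1 = a1 >> M, a1 & (MOD - 1)
    a2 += a1;                c2, a2 = a2 >> M, a2 & (MOD - 1)
    a3 += a2;                c3, a3 = a3 >> M, a3 & (MOD - 1)
    # Noise-cancellation network: y = c1 + (1 - z^-1) c2 + (1 - z^-1)^2 c3.
    y[i] = c1 + (c2 - c2d) + (c3 - 2 * c3d1 + c3d2)
    c2d, c3d2, c3d1 = c2, c3d1, c3
print("mean output:", y.mean())    # close to frac/MOD plus the 0.5 LSB dither offset
```

An FFT of y with and without the dither line would show the tonal spurs of the undithered accumulator being whitened, which is the effect the paper quantifies per injection path.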
A 10/30 MHz Fast Reference-Tracking Buck Converter With DDA-Based Type-III Compensator A 10/30 MHz voltage-mode controlled buck converter with a wide duty-cycle range is presented. A high-accuracy delay-compensated ramp generator using only low-speed comparators but can work up to 70 MHz is proposed. By using a differential difference amplifier (DDA), a new Type-III compensator is proposed to reduce the chip area of the compensator by 60%. Moreover, based on the unique structure of the proposed compensator, an end-point prediction (EPP) scheme is also implemented to achieve fast reference-tracking responses. The converter was fabricated in a 0.13 μm standard CMOS process. It achieves wide duty-cycle ranges of 0.75 and 0.59 when switching at 10 MHz and 30 MHz with peak efficiencies of 91.8% and 86.6%, respectively. The measured maximum output power is 3.6 W with 2.4 V output voltage and 1.5 A load current. With a constant load current of 500 mA, the up-tracking speeds for switching frequencies of 10 MHz and 30 MHz are 1.67 μs/V and 0.67 μs/V, respectively. The down-tracking speeds for 10 MHz and 30 MHz are 4.44 μs/V and 1.56 μs/V, respectively.
Power Efficiency Comparison of Event-Driven and Fixed-Rate Signal Conversion and Compression for Biomedical Applications Energy-constrained biomedical recording systems need power-efficient data converters and good signal compression in order to meet the stringent power consumption requirements of many applications. In literature today, typically a SAR ADC in combination with digital compression is used. Recently, alternative event-driven sampling techniques have been proposed that incorporate compression in the ADC, such as level-crossing A/D conversion. This paper describes the power efficiency analysis of such level-crossing ADC (LCADC) and the traditional fixed-rate SAR ADC with simple compression. A model for the power consumption of the LCADC is derived, which is then compared to the power consumption of the SAR ADC with zero-order hold (ZOH) compression for multiple biosignals (ECG, EMG, EEG, and EAP). The LCADC is more power efficient than the SAR ADC up to a cross-over point in quantizer resolution (for example 8 bits for an EEG signal). This cross-over point decreases with the ratio of the maximum to average slope in the signal of the application. It also changes with the technology and design techniques used. The LCADC is thus suited for low to medium resolution applications. In addition, the event-driven operation of an LCADC results in fewer data to be transmitted in a system application. The event-driven LCADC without timer and with single-bit quantizer achieves a reduction in power consumption at system level of two orders of magnitude, an order of magnitude better than the SAR ADC with ZOH compression. At system level, the LCADC thus offers a big advantage over the SAR ADC.
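A small illustration of the data-volume effect underlying the comparison above: count how many level-crossing events an 8-bit quantized track of a bursty signal produces versus the fixed-rate sample count over the same window. The signal and rates are arbitrary, and the paper's power model is far more detailed; this only shows the quantity that drives the system-level savings.

```python
import numpy as np

fs, T = 1000.0, 10.0                       # fixed sampling rate (Hz), duration (s)
t = np.arange(0, T, 1.0 / fs)
x = np.sin(2 * np.pi * 1.0 * t) * np.exp(-t / 5)   # bursty, decaying test signal

bits = 8
levels = np.round(x * (2 ** (bits - 1))).astype(int)   # quantized track
# Treat each change of the quantized code between samples as one LC event.
lc_events = np.count_nonzero(np.diff(levels))
print(f"fixed-rate samples: {len(t)}, level-crossing events: {lc_events}")
```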
1.11
0.11
0.1
0.1
0.1
0.1
0.035952
0.01
0
0
0
0
0
0
Polynomial Counting in Anonymous Dynamic Networks with Applications to Anonymous Dynamic Algebraic Computations. Starting with the work of Michail, Chatzigiannakis, and Spirakis [Michail et al., 2013], the problem of Counting the number of nodes in Anonymous Dynamic Networks has attracted a lot of attention. The problem is challenging because nodes are indistinguishable (they lack identifiers and execute the same program) and the topology may change arbitrarily from round to round of communication, as long as the network is connected in each round. The problem is central in distributed computing as the number of participants is frequently needed to make important decisions, such as termination, agreement, synchronization, and many others. A variety of algorithms built on top of mass-distribution techniques have been presented, analyzed, and also experimentally evaluated; some of them assumed additional knowledge of network characteristics, such as bounded degree or a given upper bound on the network size. However, the question of whether Counting can be solved deterministically in sub-exponential time remained open. In this work, we answer this question positively by presenting Methodical Counting, which runs in polynomial time and requires no knowledge of network characteristics. Moreover, we also show how to extend Methodical Counting to compute the sum of input values and more complex functions without extra cost. Our analysis leverages previous work on random walks in evolving graphs, combined with carefully chosen alarms in the algorithm that control the process and its parameters. To the best of our knowledge, our Counting algorithm and its extensions to other algebraic and Boolean functions are the first that can be implemented in practice with worst-case guarantees.
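A toy rendering of the mass-distribution primitive that such counting algorithms build on: non-leader nodes start with one unit of mass, share it with their current neighbors each round, and a leader absorbs whatever reaches it, so its total approaches n - 1. The alarms, parameter control, and worst-case analysis that make Methodical Counting polynomial are not reproduced here; topology generation is our own stand-in for an adversarial dynamic network.

```python
import random

def counting_rounds(n, rounds, seed=1):
    rng = random.Random(seed)
    mass = [0.0] + [1.0] * (n - 1)             # node 0 is the leader
    for _ in range(rounds):
        # Fresh connected topology each round: a ring plus random chords.
        edges = {tuple(sorted((i, (i + 1) % n))) for i in range(n)}
        edges |= {tuple(sorted(rng.sample(range(n), 2))) for _ in range(n)}
        deg = [0] * n
        for u, v in edges:
            deg[u] += 1; deg[v] += 1
        new = [0.0] * n
        for u in range(n):
            if u == 0:
                new[0] += mass[0]               # leader absorbs: never sends
                continue
            share = mass[u] / (deg[u] + 1)      # split evenly, keep one part
            new[u] += share
            for a, b in edges:
                if u in (a, b):
                    new[b if a == u else a] += share
        mass = new
    return mass[0]                              # -> n - 1 as rounds grow

print(counting_rounds(n=20, rounds=400))        # approaches 19
```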
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use dominance frontiers, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
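For reference, a compact way to compute the dominance frontiers the abstract introduces, given immediate dominators. This two-loop formulation was popularized later by Cooper, Harvey, and Kennedy, but it computes the same sets: walk up from each join point's predecessors until reaching the join's immediate dominator, adding the join to every frontier passed on the way.

```python
def dominance_frontiers(preds, idom):
    # preds: {node: [predecessor nodes]}; idom: {node: immediate dominator}.
    df = {n: set() for n in preds}
    for b, ps in preds.items():
        if len(ps) < 2:
            continue                      # only join points contribute
        for p in ps:
            runner = p
            while runner != idom[b]:
                df[runner].add(b)         # b is in runner's frontier
                runner = idom[runner]
    return df

# Diamond CFG: entry -> a, entry -> b, a -> join, b -> join.
preds = {"entry": [], "a": ["entry"], "b": ["entry"], "join": ["a", "b"]}
idom = {"a": "entry", "b": "entry", "join": "entry"}
print(dominance_frontiers(preds, idom))   # a and b each have {join}
```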
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ⋄W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ⋄W. Thus, ⋄W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
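A minimal sketch of the single operation the abstract describes, the key-to-node mapping on an identifier circle, computed here from a global view of the ring; real Chord resolves the same successor query in O(log N) hops via finger tables. The identifier-space size and hash choice below are illustrative.

```python
import hashlib
from bisect import bisect_left

M = 2 ** 16                                   # small identifier space

def ident(s):
    # Hash names onto the identifier circle [0, M).
    return int(hashlib.sha1(s.encode()).hexdigest(), 16) % M

def successor(node_ids, key_id):
    # A key belongs to the first node clockwise from it on the circle.
    ids = sorted(node_ids)
    i = bisect_left(ids, key_id)
    return ids[i % len(ids)]                  # wrap around past the top

nodes = {ident(f"node-{i}") for i in range(8)}
key = ident("some/data/item")
print(f"key {key} -> node {successor(nodes, key)}")
```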
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area² product (EDA²P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA²P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
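A sketch of ADMM on one of the canonical problems listed above, the lasso: minimize (1/2)||Ax - b||² + λ||z||₁ subject to x = z. The x-update is a ridge solve (factored once and reused), the z-update is soft-thresholding, and u is the scaled dual variable. Parameter values are illustrative.

```python
import numpy as np

def lasso_admm(A, b, lam, rho=1.0, iters=200):
    n = A.shape[1]
    AtA, Atb = A.T @ A, A.T @ b
    L = np.linalg.cholesky(AtA + rho * np.eye(n))   # factor once, reuse
    x = z = u = np.zeros(n)
    soft = lambda v, k: np.sign(v) * np.maximum(np.abs(v) - k, 0.0)
    for _ in range(iters):
        rhs = Atb + rho * (z - u)
        x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))   # ridge solve
        z = soft(x + u, lam / rho)                          # prox of lam*||.||_1
        u = u + x - z                                       # dual ascent
    return z

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20); x_true[:3] = [2.0, -1.0, 0.5]
b = A @ x_true + 0.01 * rng.standard_normal(50)
print(np.round(lasso_admm(A, b, lam=1.0), 2))   # sparse, near x_true
```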
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)-size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε²}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D+√n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(log N), where N is a loose upper bound on the size of the network and D is the diameter.
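For contrast with the directed lower bounds above, a sketch of the easy undirected baseline: computing the minimum input value by synchronous flooding finishes in D rounds with O(log n)-bit messages. The graph and input values are illustrative.

```python
def flood_min(adjacency, values, rounds):
    # Each round, every node lowers its estimate to the min of its
    # neighbors' previous-round estimates; after D rounds all agree.
    est = dict(values)
    for _ in range(rounds):
        new = dict(est)
        for u, nbrs in adjacency.items():
            for v in nbrs:
                new[u] = min(new[u], est[v])
        est = new
    return est

adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}   # path graph, diameter 3
vals = {0: 7, 1: 4, 2: 9, 3: 1}
print(flood_min(adj, vals, rounds=3))          # every node reports 1
```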
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator; while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via error amplifier) with a moderate-gain high-frequency path (via bandpass filter) at the inputs of PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate two zeros required in a Type III compensator. Constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process with constant Gm/C biasing technique being applied to one of the designs. Measurement result shows that converter output is settled within 7 μs for a load current step of 500 mA. Peak efficiency of 97% is obtained at 360 mW output power, and high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of proposed compensator is reduced by > 75 % in both designs, compared to an equivalent conventional Type III compensator.
Fundamental analysis of a car to car visible light communication system This paper presents a mathematical model for car-to-car (C2C) visible light communications (VLC) that aims to predict the system performance under different communication geometries. A market-weighted headlamp beam pattern model is employed. We consider both the line-of-sight (LOS) and non-line-of-sight (NLOS) links, and outline the relationship between the communication distance, the system bit error rate (BER) performance and the BER distribution on a vertical plane. Results show that by placing a photodetector (PD) at a height of 0.2-0.4 m above road surface in the car, the communications coverage range can be extended up to 20 m at a data rate of 2Mbps.
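A generic LOS link-budget sketch in the spirit of the model above, with a Lambertian source standing in for the paper's market-weighted headlamp pattern and OOK detection at a midpoint threshold. Every numeric value here is an assumption, so only the qualitative trend of BER worsening with distance should be read from it.

```python
import numpy as np
from scipy.special import erfc

def ber_ook(distance, P_tx=10.0, m=1, area=1e-4, resp=0.5, noise_var=1e-13):
    # Lambertian LOS channel gain at normal incidence (order m, PD area in m^2).
    H = (m + 1) * area / (2 * np.pi * distance ** 2)
    i_sig = resp * P_tx * H                  # on-symbol photocurrent (A)
    sigma = np.sqrt(noise_var)               # receiver noise current (A rms)
    Q = lambda x: 0.5 * erfc(x / np.sqrt(2))
    return Q(i_sig / (2 * sigma))            # OOK with midpoint threshold

for d in (5, 10, 20):
    print(f"d = {d:2d} m  ->  BER ~ {ber_ook(d):.2e}")
```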
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
DRQ: Dynamic Region-based Quantization for Deep Neural Network Acceleration Quantization is an effective technique for Deep Neural Network (DNN) inference acceleration. However, conventional quantization techniques are either applied at network or layer level that may fail to exploit fine-grained quantization for further speedup, or only applied on kernel weights without paying attention to the feature map dynamics that may lead to lower NN accuracy. In this paper, we propose a dynamic region-based quantization, namely DRQ, which can change the precision of a DNN model dynamically based on the sensitive regions in the feature map to achieve greater acceleration while reserving better NN accuracy. We propose an algorithm to identify the sensitive regions and an architecture that utilizes a variable-speed mixed-precision convolution array to enable the algorithm with better performance and energy efficiency. Our experiments on a wide variety of networks show that compared to a coarse-grained quantization accelerator like “Eyeriss”, DRQ can achieve 92% performance gain and 72% energy reduction with less than 1% accuracy loss. Compared to the state-of-the-art mixed-precision quantization accelerator “OLAccel”, DRQ can also achieve 21% performance gain and 33% energy reduction with a 3% prediction accuracy improvement, which is quite impressive for inference.
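A toy rendering of the region-based idea: split a feature map into blocks, call a block sensitive when its mean magnitude exceeds a threshold, and quantize sensitive blocks at 8 bits and the rest at 4. DRQ's actual sensitivity detection and its variable-speed mixed-precision array are not modeled; the function names and threshold are ours.

```python
import numpy as np

def quantize(x, bits):
    # Symmetric uniform quantizer with per-block scale.
    scale = np.abs(x).max() / (2 ** (bits - 1) - 1) or 1.0
    return np.round(x / scale) * scale

def region_quantize(fmap, block=4, thresh=0.5):
    out = np.empty_like(fmap)
    h, w = fmap.shape
    for i in range(0, h, block):
        for j in range(0, w, block):
            blk = fmap[i:i + block, j:j + block]
            bits = 8 if np.abs(blk).mean() > thresh else 4   # sensitivity test
            out[i:i + block, j:j + block] = quantize(blk, bits)
    return out

fmap = np.random.default_rng(0).normal(size=(8, 8))
err = np.abs(region_quantize(fmap) - fmap).mean()
print(f"mean quantization error: {err:.4f}")
```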
Procrustes: a Dataflow and Accelerator for Sparse Deep Neural Network Training The success of DNN pruning has led to the development of energy-efficient inference accelerators that support pruned models with sparse weight and activation tensors. Because the memory layouts and dataflows in these architectures are optimized for the access patterns during inference, however, they do not efficiently support the emerging sparse training techniques. In this paper, we demonstrate (a) that accelerating sparse training requires a co-design approach where algorithms are adapted to suit the constraints of hardware, and (b) that hardware for sparse DNN training must tackle constraints that do not arise in inference accelerators. As proof of concept, we adapt a sparse training algorithm to be amenable to hardware acceleration; we then develop dataflow, data layout, and load-balancing techniques to accelerate it. The resulting system is a sparse DNN training accelerator that produces pruned models with the same accuracy as dense models without first training, then pruning, and finally retraining a dense model. Compared to training the equivalent unpruned models using a state-of-the-art DNN accelerator without sparse training support, Procrustes consumes up to 3.26× less energy and offers up to 4× speedup across a range of models, while pruning weights by an order of magnitude and maintaining unpruned accuracy.
Mixtures of Lightweight Deep Convolutional Neural Networks: Applied to Agricultural Robotics. We propose a novel approach for training deep convolutional neural networks (DCNNs) that allows us to tradeoff complexity and accuracy to learn lightweight models suitable for robotic platforms such as AgBot II (which performs automated weed management). Our approach consists of three stages, the first is to adapt a pre-trained model to the task at hand. This provides state-of-the-art performance ...
Deep Neural Network Compression by In-Parallel Pruning-Quantization. Deep neural networks enable state-of-the-art accuracy on visual recognition tasks such as image classification and object detection. However, modern networks contain millions of learned connections, and the current trend is towards deeper and more densely connected architectures. This poses a challenge to the deployment of state-of-the-art networks on resource-constrained systems, such as smartpho...
MatRaptor: A Sparse-Sparse Matrix Multiplication Accelerator Based on Row-Wise Product Sparse-sparse matrix multiplication (SpGEMM) is a computation kernel widely used in numerous application domains such as data analytics, graph processing, and scientific computing. In this work we propose MatRaptor, a novel SpGEMM accelerator that is high performance and highly resource efficient. Unlike conventional methods using inner or outer product as the meta operation for matrix multiplication, our approach is based on row-wise product, which offers a better tradeoff in terms of data reuse and on-chip memory requirements, and achieves higher performance for large sparse matrices. We further propose a new hardware-friendly sparse storage format, which allows parallel compute engines to access the sparse data in a vectorized and streaming fashion, leading to high utilization of memory bandwidth. We prototype and simulate our accelerator architecture using gem5 on a diverse set of matrices. Our experiments show that MatRaptor achieves 129.2× speedup over single-threaded CPU, 8.8× speedup over GPU and 1.8× speedup over the state-of-the-art SpGEMM accelerator (OuterSPACE). MatRaptor also has 7.2× lower power consumption and 31.3× smaller area compared to OuterSPACE.
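A sketch of the row-wise product (Gustavson-style) dataflow the abstract contrasts with inner- and outer-product SpGEMM: row i of C is a sparse accumulation of rows of B, each scaled by a nonzero of row i of A. The dict-of-dicts format below is a software stand-in for the paper's hardware-friendly sparse storage format.

```python
def spgemm_rowwise(A, B):
    # A, B: {row: {col: val}} sparse matrices; returns C = A @ B.
    C = {}
    for i, a_row in A.items():
        acc = {}                            # sparse accumulator for row i of C
        for k, a_ik in a_row.items():       # each nonzero A[i, k] ...
            for j, b_kj in B.get(k, {}).items():
                acc[j] = acc.get(j, 0.0) + a_ik * b_kj   # ... scales row k of B
        if acc:
            C[i] = acc
    return C

A = {0: {0: 2.0, 2: 1.0}, 1: {1: 3.0}}
B = {0: {1: 4.0}, 1: {0: 5.0}, 2: {1: -1.0, 3: 6.0}}
print(spgemm_rowwise(A, B))
# {0: {1: 7.0, 3: 6.0}, 1: {0: 15.0}}
```

Note how only the rows of B named by A's nonzeros are touched, which is the data-reuse property the row-wise dataflow exploits.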
SNAP: An Efficient Sparse Neural Acceleration Processor for Unstructured Sparse Deep Neural Network Inference Recent developments in deep neural network (DNN) pruning introduces data sparsity to enable deep learning applications to run more efficiently on resource- and energy-constrained hardware platforms. However, these sparse models require specialized hardware structures to exploit the sparsity for storage, latency, and efficiency improvements to the full extent. In this work, we present the sparse neural acceleration processor (SNAP) to exploit unstructured sparsity in DNNs. SNAP uses parallel associative search to discover valid weight (W) and input activation (IA) pairs from compressed, unstructured, sparse W and IA data arrays. The associative search allows SNAP to maintain a 75% average compute utilization. SNAP follows a channel-first dataflow and uses a two-level partial sum (psum) reduction dataflow to eliminate access contention at the output buffer and cut the psum writeback traffic by 22× compared with state-of-the-art DNN accelerator designs. SNAP's psum reduction dataflow can be configured in two modes to support general convolution (CONV) layers, pointwise CONV, and fully connected layers. A prototype SNAP chip is implemented in a 16-nm CMOS technology. The 2.3-mm2 test chip is measured to achieve a peak effectual efficiency of 21.55 TOPS/W (16 b) at 0.55 V and 260 MHz for CONV layers with 10% weight and activation densities. Operating on a pruned ResNet-50 network, the test chip achieves a peak throughput of 90.98 frames/s at 0.80 V and 480 MHz, dissipating 348 mW.
Ramulator: A Fast and Extensible DRAM Simulator Recently, both industry and academia have proposed many different roadmaps for the future of DRAM. Consequently, there is a growing need for an extensible DRAM simulator, which can be easily modified to judge the merits of today’s DRAM standards as well as those of tomorrow. In this paper, we present Ramulator, a fast and cycle-accurate DRAM simulator that is built from the ground up for extensibility. Unlike existing simulators, Ramulator is based on a generalized template for modeling a DRAM system, which is only later infused with the specific details of a DRAM standard. Thanks to such a decoupled and modular design, Ramulator is able to provide out-of-the-box support for a wide array of DRAM standards: DDR3/4, LPDDR3/4, GDDR5, WIO1/2, HBM, as well as some academic proposals (SALP, AL-DRAM, TLDRAM, RowClone, and SARP). Importantly, Ramulator does not sacrifice simulation speed to gain extensibility: according to our evaluations, Ramulator is 2.5× faster than the next fastest simulator. Ramulator is released under the permissive BSD license.
The GPU Computing Era GPU computing is at a tipping point, becoming more widely used in demanding consumer applications and high-performance computing. This article describes the rapid evolution of GPU architectures—from graphics processors to massively parallel many-core multiprocessors, recent developments in GPU computing architectures, and how the enthusiastic adoption of CPU+GPU coprocessing is accelerating parallel applications.
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ⋄W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ⋄W. Thus, ⋄W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
A Single-Chip 10-Band WCDMA/HSDPA 4-Band GSM/EDGE SAW-less CMOS Receiver With DigRF 3G Interface and +90 dBm IIP2 This paper describes the design and performance of a 90 nm CMOS SAW-less receiver with DigRF interface that supports 10 WCDMA bands (I, II, III, IV, V, VI, VIII, IX, X, XI) and 4 GSM bands (GSM850, EGSM900, DCS1800, PCS1900). The receiver is part of a single-chip SAW-less transceiver reference platform IC for mass-market smartphones, which has been designed to meet Category 10 HSDPA (High Speed Do...
The Interdomain Connectivity of PlanetLab Nodes In this paper we investigate the interdomain connectivity of PlanetLab nodes. We note that about 85 percent of the hosts are located within what we call the Global Research and Educational Network (GREN) - an interconnected network of high speed research networks such as Internet2 in the USA and Dante in Europe. Since traffic with source and destination on the GREN is very likely to be transited solely by the GREN, this means that over 70 percent of the end-to-end measurements between PlanetLab node pairs represent measurements of GREN characteristics. We suggest that it may be possible to systematically choose the placement of new nodes so that as the PlanetLab platform grows it becomes a closer and closer approximation to the Global Internet.
22.7-dB Gain $-$19.7-dBm $ICP_{1\mathrm{dB}}$ UWB CMOS LNA A fully differential CMOS ultrawideband low-noise amplifier (LNA) is presented. The LNA has been realized in a standard 90-nm CMOS technology and consists of a common-gate stage and two subsequent common-source stages. The common-gate input stage realizes a wideband input impedance matching to the source impedance of the receiver (i.e., the antenna), whereas the two subsequent common-source stages...
A Hybrid Threshold Self-Compensation Rectifier for RF Energy Harvesting This paper presents a novel highly efficient 5-stage RF rectifier in a SMIC 65 nm standard CMOS process. To improve power conversion efficiency (PCE) and reduce the minimum input voltage, a hybrid threshold self-compensation approach is applied in this proposed RF rectifier, which combines gate-bias threshold compensation with body-effect compensation. The proposed circuit uses PMOSFETs in all the stages except for the first stage to allow individual body-bias, which eliminates the need for triple-well technology. The presented RF rectifier exhibits a simulated maximum PCE of 30% at -16.7 dBm (20.25 µW) and produces 1.74 V across a 0.5 MΩ load resistance. With a 1 MΩ load resistance, it outputs 1.5 V DC from a remarkably low input power of -20.4 dBm (9 µW) with a PCE of about 25%.
Multi-Channel Neural Recording Implants: A Review. The recently growing progress in neuroscience research and relevant achievements, as well as advancements in the fabrication process, have increased the demand for neural interfacing systems. Brain-machine interfaces (BMIs) have been revealed to be a promising method for the diagnosis and treatment of neurological disorders and the restoration of sensory and motor function. Neural recording implants, as a part of BMI, are capable of capturing brain signals, and amplifying, digitizing, and transferring them outside of the body with a transmitter. The main challenges of designing such implants are minimizing power consumption and the silicon area. In this paper, multi-channel neural recording implants are surveyed. After presenting various neural-signal features, we investigate main available neural recording circuit and system architectures. The fundamental blocks of available architectures, such as neural amplifiers, analog to digital converters (ADCs) and compression blocks, are explored. We cover the various topologies of neural amplifiers, provide a comparison, and probe their design challenges. To achieve a relatively high SNR at the output of the neural amplifier, noise reduction techniques are discussed. Also, to transfer neural signals outside of the body, they are digitized using data converters, then in most cases, the data compression is applied to mitigate power consumption. We present the various dedicated ADC structures, as well as an overview of main data compression methods.
1.102
0.104
0.1
0.1
0.052
0.033333
0.006954
0.000143
0
0
0
0
0
0
Linearized Multi-Level Delta-Sigma Modulated Wireless Transmitters for SDR Applications Using Simple DLGA Algorithm This paper proposes a new linearization algorithm, discrete level gain adjustment (DLGA), for linearized high-efficiency multi-level delta-sigma modulator (ΔΣM)-based transmitter architectures adequate for wideband multi-standard software defined radio (SDR) applications. The new simple DLGA linearization algorithm is deployed instead of a full digital predistorter to maintain the linearity of the employed switching-mode power amplifier (SMPA), with a considerable decrease in the complexity of the digital signal processing (DSP) unit. The proposed architecture includes a multi-level envelope ΔΣM (EΔΣM) concurrently with a linearized SMPA, in order to achieve a better trade-off of power efficiency versus linearity. Based on DLGA, a three-level envelope LPΔΣM-based transmitter was implemented in a phase elimination and restoration (PER) configuration instead of the envelope elimination and restoration (EER) configuration, relaxing the bandwidth constraint of the EER configuration. First, a multi-level EΔΣM-based transmitter was studied to determine the optimal number of quantizer levels that could be used. Through MATLAB simulation and measurement results, it was shown that the best performance was achieved with a discrete-level signal that has three different power levels, including zero and regardless of the phase. From the measurements, the linearized three-level PER-LPΔΣM transmitter shows an efficiency of 36%, a signal-to-noise-and-distortion ratio of 43.8 dB, and an adjacent channel power ratio of 45 dB.
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use dominance frontiers, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ⋄W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ⋄W. Thus, ⋄W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area² product (EDA²P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA²P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)-size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε²}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D+√n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(log N), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator; while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via error amplifier) with a moderate-gain high-frequency path (via bandpass filter) at the inputs of PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate two zeros required in a Type III compensator. Constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process with constant Gm/C biasing technique being applied to one of the designs. Measurement result shows that converter output is settled within 7 μs for a load current step of 500 mA. Peak efficiency of 97% is obtained at 360 mW output power, and high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of proposed compensator is reduced by > 75 % in both designs, compared to an equivalent conventional Type III compensator.
Conductance modulation techniques in switched-capacitor DC-DC converter for maximum-efficiency tracking and ripple mitigation in 22nm Tri-gate CMOS Active conduction modulation techniques are demonstrated in a fully integrated multi-ratio switched-capacitor voltage regulator with hysteretic control, implemented in 22nm tri-gate CMOS with high-density MIM capacitor. We present (i) an adaptive switching frequency and switch-size scaling scheme for maximum efficiency tracking across a wide range of voltages and currents, governed by a frequency-based control law that is experimentally validated across multiple dies and temperatures, and (ii) a simple active ripple mitigation technique to modulate the gate drive of select MOSFET switches effectively in all conversion modes. Efficiency boosts of up to 15% are measured under light-load conditions. Load-independent output ripple of <50 mV is achieved, enabling fewer interleaving phases. Testchip implementations and measurements demonstrate ease of integration in SoC designs, power-efficiency benefits, and EMI/RFI improvements.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Multi-Agent Based Transactive Energy Management Systems for Residential Buildings with Distributed Energy Resources Proper management of building loads and distributed energy resources (DER) can offer grid assistance services in transactive energy (TE) frameworks besides providing cost savings for the consumer. However, most TE models require building loads and DER units to be managed by external entities (e.g., aggregators), and in some cases, consumers need to provide critical information related to their ele...
A Secure Distributed Transactive Energy Management Scheme for Multiple Interconnected Microgrids Considering Misbehaviors This paper develops a secure distributed transactive energy management (S-DTEM) scheme for multiple interconnected microgrids (MGs). Within the scheme, each MG is managed by a distributed MG energy management system (MG-EMS) which only exchanges information of trading quantities and prices with other MGs to preserve information-privacy. When each MG behaves as a price taker, its S-DTEM dynamically...
Estimation of entropy and mutual information We present some new results on the nonparametric estimation of entropy and mutual information. First, we use an exact local expansion of the entropy function to prove almost sure consistency and central limit theorems for three of the most commonly used discretized information estimators. The setup is related to Grenander's method of sieves and places no assumptions on the underlying probability measure generating the data. Second, we prove a converse to these consistency theorems, demonstrating that a misapplication of the most common estimation techniques leads to an arbitrarily poor estimate of the true information, even given unlimited data. This "inconsistency" theorem leads to an analytical approximation of the bias, valid in surprisingly small sample regimes and more accurate than the usual 1/N formula of Miller and Madow over a large region of parameter space. The two most practical implications of these results are negative: (1) information estimates in a certain data regime are likely contaminated by bias, even if "bias-corrected" estimators are used, and (2) confidence intervals calculated by standard techniques drastically underestimate the error of the most common estimation methods. Finally, we note a very useful connection between the bias of entropy estimators and a certain polynomial approximation problem. By casting bias calculation problems in this approximation theory framework, we obtain the best possible generalization of known asymptotic bias results. More interestingly, this framework leads to an estimator with some nice properties: the estimator comes equipped with rigorous bounds on the maximum error over all possible underlying probability distributions, and this maximum error turns out to be surprisingly small. We demonstrate the application of this new estimator on both real and simulated data.
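The plug-in estimator and the classical Miller-Madow correction referenced above are simple to state in code. A minimal sketch (our naming; this is the baseline the paper analyzes, not the paper's own improved estimator):

import math
from collections import Counter

def entropy_mle(samples):
    """Plug-in ('maximum likelihood') entropy estimate, in nats."""
    n = len(samples)
    counts = Counter(samples)
    return -sum((c / n) * math.log(c / n) for c in counts.values())

def entropy_miller_madow(samples):
    """MLE estimate plus the classical (m-1)/(2N) bias correction,
    where m is the number of symbols observed with nonzero count."""
    m = len(set(samples))
    return entropy_mle(samples) + (m - 1) / (2 * len(samples))

data = [0, 1, 1, 2, 0, 1, 3, 1]
print(entropy_mle(data), entropy_miller_madow(data))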
Demand Side Management: Demand Response, Intelligent Energy Systems, and Smart Loads Energy management means to optimize one of the most complex and important technical creations that we know: the energy system. While there is plenty of experience in optimizing energy generation and distribution, it is the demand side that receives increasing attention by research and industry. Demand Side Management (DSM) is a portfolio of measures to improve the energy system at the side of consumption. It ranges from improving energy efficiency by using better materials, over smart energy tariffs with incentives for certain consumption patterns, up to sophisticated real-time control of distributed energy resources. This paper gives an overview and a taxonomy for DSM, analyzes the various types of DSM, and gives an outlook on the latest demonstration projects in this domain.
Data-Driven Pricing Strategy for Demand-Side Resource Aggregators We consider a utility who seeks to coordinate the energy consumption of multiple demand-side flexible resource aggregators. For the purpose of privacy protection, the utility has no access to the detailed load information of the resource aggregators. Instead, we assume that the utility can directly observe each aggregator's aggregate energy consumption outcomes. Furthermore, the utility can steer aggregator energy consumption via time-varying electricity price profiles. Based on an inverse optimization technique, we propose an estimation method for the utility to infer the energy requirement information of aggregators. Subsequently, we design a data-driven pricing scheme to help the utility achieve system-level control objectives (e.g., minimizing peak demand) by combining a hybrid particle swarm optimizer with mutation (HPSOM) and an iterative algorithm. Case studies have demonstrated the effectiveness of the proposed approach against two benchmark pricing strategies – a flat-rate scheme and a time-of-use (TOU) scheme.
An Energy Management System for Isolated Microgrids with Thermal Energy Resources A novel Energy Management System (EMS) model for an isolated microgrid, integrating thermal energy resources, such as Combined Heat and Power (CHP) units, boilers, Heat Pumps (HPs), and Thermal Storage System (TSS), while considering thermal load models, is proposed in this paper. The developed EMS is tested and validated with a real testbed microgrid located in Bari, Italy, which supplies both el...
Chord: a scalable peer-to-peer lookup protocol for internet applications A fundamental problem that confronts peer-to-peer applications is the efficient location of the node that stores a desired data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis and simulations show that Chord is scalable: Communication cost and the state maintained by each node scale logarithmically with the number of Chord nodes.
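A minimal sketch of the core Chord idea, assuming SHA-1 hashing onto a small identifier ring: a key is stored at its successor, the first node clockwise from the key's identifier. For brevity this uses a linear scan over a known node list rather than Chord's O(log n) finger-table routing; all names are illustrative.

import hashlib

def ring_id(name, bits=16):
    """Hash a node name or key onto a 2**bits identifier ring."""
    digest = hashlib.sha1(name.encode()).hexdigest()
    return int(digest, 16) % (2 ** bits)

def successor(node_ids, key_id):
    """First node clockwise from key_id, wrapping around the ring."""
    candidates = sorted(node_ids)
    for nid in candidates:
        if nid >= key_id:
            return nid
    return candidates[0]  # wrapped past the largest id

nodes = [ring_id(f"node-{i}") for i in range(4)]
k = ring_id("some-data-item")
print(f"key {k} -> node {successor(nodes, k)}")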
Time-delay systems: an overview of some recent advances and open problems After presenting some motivations for the study of time-delay systems, this paper recalls modifications (models, stability, structure) arising from the presence of the delay phenomenon. A brief overview of some control approaches is then provided, the sliding mode and time-delay controls in particular. Lastly, some open problems are discussed: the constructive use of the delayed inputs, the digital implementation of distributed delays, the control via the delay, and the handling of information related to the delay value.
Bayesian Network Classifiers Recent work in supervised learning has shown that a surprisingly simple Bayesian classifier with strong assumptions of independence among features, called naive Bayes, is competitive with state-of-the-art classifiers such as C4.5. This fact raises the question of whether a classifier with less restrictive assumptions can perform even better. In this paper we evaluate approaches for inducing classifiers from data, based on the theory of learning Bayesian networks. These networks are factored representations of probability distributions that generalize the naive Bayesian classifier and explicitly represent statements about independence. Among these approaches we single out a method we call Tree Augmented Naive Bayes (TAN), which outperforms naive Bayes, yet at the same time maintains the computational simplicity (no search involved) and robustness that characterize naive Bayes. We experimentally tested these approaches, using problems from the University of California at Irvine repository, and compared them to C4.5, naive Bayes, and wrapper methods for feature selection.
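For concreteness, a minimal categorical naive Bayes classifier illustrating the strong independence assumption that TAN relaxes; the add-alpha smoothing and toy data are our own illustrative choices, not the paper's experimental setup.

from collections import Counter, defaultdict
import math

def train_nb(rows, labels):
    """rows: list of feature tuples. Returns log-priors and per-(class, feature) counts."""
    n = len(labels)
    priors = {c: math.log(k / n) for c, k in Counter(labels).items()}
    like = defaultdict(Counter)  # (class, feature index) -> value counts
    for row, c in zip(rows, labels):
        for j, v in enumerate(row):
            like[(c, j)][v] += 1
    return priors, like

def predict_nb(priors, like, row, alpha=1.0):
    """argmax_c log P(c) + sum_j log P(x_j | c), with crude add-alpha smoothing."""
    best, best_score = None, float("-inf")
    for c, logp in priors.items():
        score = logp
        for j, v in enumerate(row):
            counts = like[(c, j)]
            total = sum(counts.values())
            vocab = len(counts) + 1  # crude: observed values plus one unseen slot
            score += math.log((counts[v] + alpha) / (total + alpha * vocab))
        if score > best_score:
            best, best_score = c, score
    return best

X = [("sunny", "hot"), ("rain", "cool"), ("sunny", "cool"), ("rain", "hot")]
y = ["no", "yes", "yes", "no"]
priors, like = train_nb(X, y)
print(predict_nb(priors, like, ("sunny", "cool")))  # -> "yes"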
Distributed computation in dynamic networks In this paper we investigate distributed computation in dynamic networks in which the network topology changes from round to round. We consider a worst-case model in which the communication links for each round are chosen by an adversary, and nodes do not know who their neighbors for the current round are before they broadcast their messages. The model captures mobile networks and wireless networks, in which mobility and interference render communication unpredictable. In contrast to much of the existing work on dynamic networks, we do not assume that the network eventually stops changing; we require correctness and termination even in networks that change continually. We introduce a stability property called T-interval connectivity (for T ≥ 1), which stipulates that for every T consecutive rounds there exists a stable connected spanning subgraph. For T = 1 this means that the graph is connected in every round, but changes arbitrarily between rounds. We show that in 1-interval connected graphs it is possible for nodes to determine the size of the network and compute any computable function of their initial inputs in O(n²) rounds using messages of size O(log n + d), where d is the size of the input to a single node. Further, if the graph is T-interval connected for T > 1, the computation can be sped up by a factor of T, and any function can be computed in O(n + n²/T) rounds using messages of size O(log n + d). We also give two lower bounds on the token dissemination problem, which requires the nodes to disseminate k pieces of information to all the nodes in the network. The T-interval connected dynamic graph model is a novel model, which we believe opens new avenues for research in the theory of distributed computing in wireless, mobile and dynamic networks.
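The T-interval connectivity property itself is easy to check on a concrete trace of the dynamic graph. A minimal sketch (ours, using union-find; not from the paper): the property holds if the intersection of every T consecutive edge sets is still a connected spanning subgraph.

def connected(nodes, edges):
    """Union-find connectivity test on a static edge set."""
    parent = {v: v for v in nodes}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v
    for u, w in edges:
        parent[find(u)] = find(w)
    return len({find(v) for v in nodes}) == 1

def t_interval_connected(nodes, rounds, T):
    """rounds: list of edge sets, one per round."""
    return all(
        connected(nodes, set.intersection(*rounds[i:i + T]))
        for i in range(len(rounds) - T + 1)
    )

nodes = {1, 2, 3}
rounds = [{(1, 2), (2, 3)}, {(2, 3), (1, 3)}, {(1, 2), (1, 3)}]
print(t_interval_connected(nodes, rounds, T=1))  # True: connected every round
print(t_interval_connected(nodes, rounds, T=2))  # False: intersections too sparse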
Master Data Quality Barriers: An Empirical Investigation Purpose - The development of IT has enabled organizations to collect and store many times more data than they were able to just decades ago. This means that companies are now faced with managing huge amounts of data, which represents new challenges in ensuring high data quality. The purpose of this paper is to identify barriers to obtaining high master data quality.Design/methodology/approach - This paper defines relevant master data quality barriers and investigates their mutual importance through organizing data quality barriers identified in literature into a framework for analysis of data quality. The importance of the different classes of data quality barriers is investigated by a large questionnaire study, including answers from 787 Danish manufacturing companies.Findings - Based on a literature review, the paper identifies 12 master data quality barriers. The relevance and completeness of this classification is investigated by a large questionnaire study, which also clarifies the mutual importance of the defined barriers and the differences in importance in small, medium, and large companies.Research limitations/implications - The defined classification of data quality barriers provides a point of departure for future research by pointing to relevant areas for investigation of data quality problems. The limitations of the study are that it focuses only on manufacturing companies and master data (i.e. not transaction data).Practical implications - The classification of data quality barriers can give companies increased awareness of why they experience data quality problems. In addition, the paper suggests giving primary focus to organizational issues rather than perceiving poor data quality as an IT problem.Originality/value - Compared to extant classifications of data quality barriers, the contribution of this paper represents a more detailed and complete picture of what the barriers are in relation to data quality. Furthermore, the presented classification has been investigated by a large questionnaire study, for which reason it is founded on a more solid empirical basis than existing classifications.
A 41-phase switched-capacitor power converter with 3.8mV output ripple and 81% efficiency in baseline 90nm CMOS.
Understanding the regenerative comparator circuit The regenerative comparator circuit which lies at the heart of A/D conversion, slicer circuits, and memory sensing, is unstable, time-varying, nonlinear, and with multiple equilibria. That does not mean, as this paper shows, that it cannot be understood with simple equivalent circuits that reveal its dynamics completely, and enable it to be designed to specifications on static and dynamic offset and noise. The analysis is applied to the StrongArm latch.
A Sub-μW Reconfigurable Front-End for Invasive Neural Recording That Exploits the Spectral Characteristics of the Wideband Neural Signal This paper presents a sub-μW ac-coupled reconfigurable front-end for invasive wideband neural signal recording. The proposed topology embeds filtering capabilities enabling the selection of different frequency bands inside the neural signal spectrum. Power consumption is optimized by defining specific noise targets for each sub-band. These targets take into account the spectral characteristics of wideband neural signals: local field potentials (LFP) exhibit 1/f^x magnitude scaling while action potentials (AP) show uniform magnitude across frequency. Additionally, noise targets also consider electrode noise and the spectral distribution of noise sources in the circuit. An experimentally verified prototype designed in a standard 180 nm CMOS process draws 815 nW from a 1 V supply. The front-end is able to select among four different frequency bands (modes) up to 5 kHz. The measured input-referred spot-noise at 500 Hz in the LFP mode (1 Hz - 700 Hz) is 55 nV/√Hz while the integrated noise in the AP mode (200 Hz - 5 kHz) is 4.1 μVrms. The proposed front-end achieves sub-μW operation without penalizing other specifications such as input swing, common-mode or power-supply rejection ratios. It reduces the power consumption of neural front-ends with spectral selectivity by 6.1× and, compared with conventional wideband front-ends, it obtains a reduction of 2.5×.
1.066667
0.066667
0.066667
0.066667
0.066667
0.066667
0
0
0
0
0
0
0
0
A 2GHz voltage mode power scalable RF-Front-End with 2.5dB-NF and 0.5dBm-1dBCP The removal of the surface acoustic wave (SAW) filter in front of the receiver, in favour of less expensive and less selective filtering solutions, demands large input signals to be handled (sometimes above 0 dBm) without degrading the noise floor. Such requirements lead to an increase of power consumption in both the signal and the local oscillator (LO) paths. In the former, larger input compression points (CP) are typically obtained by limiting the voltage gain at RF with the use of current-passive mixer architectures followed by power-hungry trans-impedance amplifiers (TIA) [1]–[5]. In the LO path, large input signals demand power-hungry buffers (with phase noise (PN) even below -170dBc/Hz) to deal with reciprocal mixing phenomena [6].
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use dominance frontiers, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
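As an illustration of the dominance-frontier concept, here is a minimal sketch in the later Cooper-Harvey-Kennedy formulation, which assumes immediate dominators are already computed; it illustrates the same data structure, not this paper's exact algorithm.

def dominance_frontiers(preds, idom):
    """preds: node -> list of CFG predecessors; idom: node -> immediate
    dominator (entry maps to itself). Returns node -> frontier set."""
    df = {v: set() for v in preds}
    for b, ps in preds.items():
        if len(ps) >= 2:  # frontiers arise only at join points
            for p in ps:
                runner = p
                while runner != idom[b]:
                    df[runner].add(b)
                    runner = idom[runner]
    return df

# Diamond CFG: entry -> a, b; a, b -> merge.
preds = {"entry": [], "a": ["entry"], "b": ["entry"], "merge": ["a", "b"]}
idom = {"entry": "entry", "a": "entry", "b": "entry", "merge": "entry"}
print(dominance_frontiers(preds, idom))  # a and b each have frontier {'merge'}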
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ◊W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ◊W. Thus, ◊W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area² product (EDA²P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA²P and EDAP.
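The combined metrics the abstract defines are straightforward products; a minimal sketch with hypothetical numbers (the units and values below are ours, for illustration only):

def edap(energy_j, delay_s, area_mm2):
    """Energy-delay-area product (EDAP)."""
    return energy_j * delay_s * area_mm2

def eda2p(energy_j, delay_s, area_mm2):
    """Energy-delay-area^2 product (EDA^2P), weighting area more heavily."""
    return energy_j * delay_s * area_mm2 ** 2

# Two hypothetical design points of a manycore chip:
print(edap(1.2, 0.8, 120), eda2p(1.2, 0.8, 120))
print(edap(1.0, 0.9, 150), eda2p(1.0, 0.9, 150))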
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
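As one concrete instance from the list of applications above, a minimal numpy sketch of ADMM for the lasso, using the standard x-minimization / soft-thresholding / dual-update splitting; the penalty parameter, iteration count, and data are illustrative choices, not recommendations from the review.

import numpy as np

def lasso_admm(A, b, lam, rho=1.0, iters=200):
    """min 0.5*||Ax - b||^2 + lam*||x||_1 via ADMM with the splitting x = z."""
    n = A.shape[1]
    AtA, Atb = A.T @ A, A.T @ b
    L = np.linalg.inv(AtA + rho * np.eye(n))  # cached solve (fine for small n)
    x = z = u = np.zeros(n)
    for _ in range(iters):
        x = L @ (Atb + rho * (z - u))                                   # ridge-like x-update
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0)   # soft-threshold
        u = u + x - z                                                   # dual ascent
    return z

rng = np.random.default_rng(0)
A = rng.normal(size=(40, 10))
x_true = np.zeros(10); x_true[[1, 4]] = [2.0, -3.0]
b = A @ x_true + 0.01 * rng.normal(size=40)
print(np.round(lasso_admm(A, b, lam=0.5), 2))  # recovers the sparse support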
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)-size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε²}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D+√n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(log N), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator, while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via error amplifier) with a moderate-gain high-frequency path (via bandpass filter) at the inputs of the PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate the two zeros required in a Type III compensator. A constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process, with the constant Gm/C biasing technique applied to one of the designs. Measurement results show that the converter output settles within 7 μs for a load current step of 500 mA. Peak efficiency of 97% is obtained at 360 mW output power, and high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of the proposed compensator are reduced by more than 75% in both designs, compared to an equivalent conventional Type III compensator.
Fundamental analysis of a car to car visible light communication system This paper presents a mathematical model for car-to-car (C2C) visible light communications (VLC) that aims to predict the system performance under different communication geometries. A market-weighted headlamp beam pattern model is employed. We consider both the line-of-sight (LOS) and non-line-of-sight (NLOS) links, and outline the relationship between the communication distance, the system bit error rate (BER) performance and the BER distribution on a vertical plane. Results show that by placing a photodetector (PD) at a height of 0.2-0.4 m above the road surface in the car, the communications coverage range can be extended up to 20 m at a data rate of 2 Mbps.
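The paper's market-weighted headlamp model is not reproduced here, but the generic Lambertian LOS channel gain that underlies most VLC link budgets gives the flavor. A minimal sketch with assumed parameter values (Lambertian order, detector area, and field of view are our illustrative choices):

import math

def los_gain(d, phi_deg, psi_deg, m=1, area_m2=1e-4, fov_deg=60):
    """Generic Lambertian LOS channel gain: emission angle phi, incidence
    angle psi (cut off beyond the receiver field of view), distance d."""
    if psi_deg > fov_deg:
        return 0.0
    phi, psi = math.radians(phi_deg), math.radians(psi_deg)
    return (m + 1) * area_m2 * (math.cos(phi) ** m) * math.cos(psi) / (2 * math.pi * d * d)

# Gain falls off as 1/d^2 with distance:
for d in (5, 10, 20):
    print(d, los_gain(d, phi_deg=10, psi_deg=20))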
A 32-Channel Time-Multiplexed Artifact-Aware Neural Recording System This paper presents a low-power, low-noise microsystem for the recording of neural local field potentials or intracranial electroencephalographic signals. It features 32 time-multiplexed channels at the electrode interface and offers the possibility to spatially delta encode data to take advantage of the large correlation of signals captured from nearby channels. The circuit also implements a mixed-signal voltage-triggered auto-ranging algorithm which allows to attenuate large interferers in digital domain while preserving neural information. This effectively increases the system dynamic range and avoids the onset of saturation. A prototype, fabricated in a standard 180 nm CMOS process, has been experimentally verified in-vitro with cellular cultures of primary cortical neurons from mice. The system shows an integrated input-referred noise in the 0.5–200 Hz band of 1.4 μVrms for a spot noise of about 85 nV/√Hz. The system draws 1.5 μW per channel from a 1.2 V supply and obtains 71 dB + 26 dB dynamic range when the artifact-aware auto-ranging mechanism is enabled, without penalising other critical specifications such as crosstalk between channels or common-mode and power supply rejection ratios.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Towards Higher Performance and Robust Compilation for CGRA Modulo Scheduling Coarse-Grained Reconfigurable Architectures (CGRAs) are a promising solution for accelerating computation-intensive tasks due to their good trade-off between energy efficiency and flexibility. One of the challenging research topics is how to effectively deploy loops onto CGRAs within acceptable compilation time. Modulo scheduling (MS) has been shown to be efficient in deploying loops onto CGRAs. Existing CGRA MS...
Compiler algorithms for synchronization Translating program loops into a parallel form is one of the most important transformations performed by concurrentizing compilers. This transformation often requires the insertion of synchronization instructions within the body of the concurrent loop. Several loop synchronization techniques are presented first. Compiler algorithms to generate synchronization instructions for singly-nested loops are then discussed. Finally, a technique for the elimination of redundant synchronization instructions is presented.
A Software Scheme for Multithreading on CGRAs Recent industry trends show a drastic rise in the use of hand-held embedded devices, from everyday applications to medical (e.g., monitoring devices) and critical defense applications (e.g., sensor nodes). The two key requirements in the design of such devices are their processing capabilities and battery life. There is therefore an urgency to build high-performance and power-efficient embedded devices, inspiring researchers to develop novel system designs for the same. The use of a coprocessor (application-specific hardware) to offload power-hungry computations is gaining favor among system designers to suit their power budgets. We propose the use of CGRAs (Coarse-Grained Reconfigurable Arrays) as a power-efficient coprocessor. Though CGRAs have been widely used for streaming applications, the extensive compiler support required limits its applicability and use as a general purpose coprocessor. In addition, a CGRA structure can efficiently execute only one statically scheduled kernel at a time, which is a serious limitation when used as an accelerator to a multithreaded or multitasking processor. In this work, we envision a multithreaded CGRA where multiple schedules (or kernels) can be executed simultaneously on the CGRA (as a coprocessor). We propose a comprehensive software scheme that transforms the traditionally single-threaded CGRA into a multithreaded coprocessor to be used as a power-efficient accelerator for multithreaded embedded processors. Our software scheme includes (1) a compiler framework that integrates with existing CGRA mapping techniques to prepare kernels for execution on the multithreaded CGRA and (2) a runtime mechanism that dynamically schedules multiple kernels (offloaded from the processor) to execute simultaneously on the CGRA coprocessor. Our multithreaded CGRA coprocessor implementation thus makes it possible to achieve improved power-efficient computing in modern multithreaded embedded systems.
Domain Specialization Is Generally Unnecessary for Accelerators. Domain-specific accelerators (DSAs), which sacrifice programmability for efficiency, are a reaction to the waning benefits of device scaling. This article demonstrates that there are commonalities between DSAs that can be exploited with programmable mechanisms. The goals are to create a programmable architecture that can match the benefits of a DSA and to create a platform for future accelerator i...
PathSeeker: A Fast Mapping Algorithm for CGRAs Coarse-grained reconfigurable arrays (CGRAs) have gained traction over the years as a low-power accelerator due to the efficient mapping of the compute-intensive loops onto the 2-D array by the CGRA compiler. When encountering a mapping failure for a given node, existing mapping techniques either exit and retry the mapping anew, or perform backtracking, i.e., recursively remove the previously mapped node to find a valid mapping. Abandoning mapping and starting afresh can deteriorate the quality of mapping and the compilation time. Even backtracking may not be the best choice since the previous node may not be the incorrectly placed node. To tackle this issue, we propose PathSeeker - a mapping approach that analyzes mapping failures and performs local adjustments to the schedule to obtain a mapping. Experimental results on 35 top performance-critical loops from MiBench, Rodinia, and Parboil benchmark suites demonstrate that PathSeeker can map all of them with better mapping quality and dramatically less compilation time than the previous state-of-the-art approaches - GraphMinor and RAMP, which were unable to map 20 and 5 loops, respectively. Over these benchmarks, PathSeeker achieves 28% better performance at 550x compilation speedup over GraphMinor and 3% better performance at 10x compilation speedup over RAMP on a 4x4 CGRA.
OpenCGRA: An Open-Source Unified Framework for Modeling, Testing, and Evaluating CGRAs Coarse-grained reconfigurable arrays (CGRAs), loosely defined as arrays of functional units (e.g., adder, subtractor, multiplier, divider, or larger multi-operation units, but smaller than a general-purpose core) interconnected through a Network-on-Chip, provide higher flexibility than domain-specific ASIC accelerators while offering increased hardware efficiency with respect to fine-grained reconfigurable devices, such as Field Programmable Gate Arrays (FPGAs). The fast evolving fields of machine learning and edge computing, which are seeing a continuous flow of novel algorithms and larger models, make CGRAs ideal architectures to allow domain specialization without losing too much generality. Designing and generating a CGRA, however, still requires to define the type and number of the specific functional units, implement their interconnect and the network topology, and perform the simulation and validation, given a variety of workloads of interest. In this paper, we propose OpenCGRA, the first open-source integrated framework that is able to support the full top-to-bottom design flow for specializing and implementing CGRAs: modeling at different abstraction levels (functional level, cycle level, register-transfer level) with compiler support, verification at different granularities (unit testing, integration testing, property-based testing), simulation, generation of synthesizable Verilog, and characterization (area, power, and timing). By using OpenCGRA, it only takes a few hours to build a specialized power- and area-efficient CGRA throughout the entire design flow given a set of applications of interest. OpenCGRA is available online at https://github.com/pnnl/OpenCGRA.
A Fully Pipelined and Dynamically Composable Architecture of CGRA. Future processor chips will not be limited by the transistor resources, but will be mainly constrained by energy efficiency. Reconfigurable fabrics bring higher energy efficiency than CPUs via customized hardware that adapts to user applications. Among different reconfigurable fabrics, coarse-grained reconfigurable arrays (CGRAs) can be even more efficient than fine-grained FPGAs when bit-level customization is not necessary in target applications. CGRAs were originally developed in the era when transistor resources were more critical than energy efficiency. Previous work shares hardware among different operations via modulo scheduling and time multiplexing of processing elements. In this work, we focus on an emerging scenario where transistor resources are rich. We develop a novel CGRA architecture that enables full pipelining and dynamic composition to improve energy efficiency by taking full advantage of abundant transistors. Several new design challenges are solved. We implement a prototype of the proposed architecture in a commodity FPGA chip for verification. Experiments show that our architecture can fully exploit the energy benefits of customization for user applications in the scenario of rich transistor resources.
Randomized algorithms This text by two well-known experts in the field presents the basic concepts in the design and analysis of randomized algorithms at a level accessible to beginning graduate students, professionals and researchers.
The gem5 simulator The gem5 simulation infrastructure is the merger of the best aspects of the M5 [4] and GEMS [9] simulators. M5 provides a highly configurable simulation framework, multiple ISAs, and diverse CPU models. GEMS complements these features with a detailed and flexible memory system, including support for multiple cache coherence protocols and interconnect models. Currently, gem5 supports most commercial ISAs (ARM, ALPHA, MIPS, Power, SPARC, and x86), including booting Linux on three of them (ARM, ALPHA, and x86). The project is the result of the combined efforts of many academic and industrial institutions, including AMD, ARM, HP, MIPS, Princeton, MIT, and the Universities of Michigan, Texas, and Wisconsin. Over the past ten years, M5 and GEMS have been used in hundreds of publications and have been downloaded tens of thousands of times. The high level of collaboration on the gem5 project, combined with the previous success of the component parts and a liberal BSD-like license, make gem5 a valuable full-system simulation tool.
PRESENT: An Ultra-Lightweight Block Cipher With the establishment of the AES the need for new block ciphers has been greatly diminished; for almost all block cipher applications the AES is an excellent and preferred choice. However, despite recent implementation advances, the AES is not suitable for extremely constrained environments such as RFID tags and sensor networks. In this paper we describe an ultra-lightweight block cipher, PRESENT. Both security and hardware efficiency have been equally important during the design of the cipher and at 1570 GE, the hardware requirements for PRESENT are competitive with today's leading compact stream ciphers.
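For flavor, a minimal sketch of the two data-path layers of a PRESENT round, using what we understand to be the published 4-bit S-box and bit permutation (worth checking against the specification); the key schedule and addRoundKey step are omitted, so this is not a usable cipher implementation.

PRESENT_SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
                0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def sbox_layer(state):
    """Apply the 4-bit S-box to each nibble of the 64-bit state."""
    out = 0
    for i in range(16):
        nibble = (state >> (4 * i)) & 0xF
        out |= PRESENT_SBOX[nibble] << (4 * i)
    return out

def p_layer(state):
    """Bit i of the state moves to position (16*i) mod 63; bit 63 is fixed."""
    out = 0
    for i in range(64):
        j = 63 if i == 63 else (16 * i) % 63
        out |= ((state >> i) & 1) << j
    return out

s = 0x0123456789ABCDEF
print(hex(p_layer(sbox_layer(s))))  # one unkeyed round of the data path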
The Quadrature LC Oscillator: A Complete Portrait Based on Injection Locking We show that the quadrature LC oscillator is best treated as two strongly coupled, nominally identical oscillators that are locked to the same frequency. Differential equations that extend Adler's description of locking to strong injection reveal the full dynamics of this circuit. With a simplifying insight, the analysis reveals all the modes of the oscillator, their stability, the effects of mism...
Architectural Evolution of Integrated M-Phase High-Q Bandpass Filters M-phase bandpass filters (BPFs) are analyzed, and variations of the structure are proposed. For values of M that are integer multiples of 4, the conventional M-phase BPF structure is modified to take complex baseband impedances and frequency-translate their complex impedance response to the local oscillator frequency. Also, it is demonstrated how the M-phase BPF can be modified to implement a high quality factor (Q) image-rejection BPF with quadrature RF inputs. In addition, we present high-Q BPFs whose center frequencies are equal to the sum or difference of the RF and IF (intermediate frequency) clocks. Such filters can be useful in heterodyne receiver architectures.
Quadrature Bandpass Sampling Rules for Single- and Multiband Communications and Satellite Navigation Receivers In this paper, we examine how existing rules for bandpass sampling rates can be applied to quadrature bandpass sampling. We find that there are significantly more allowable sampling rates and that the minimum rate can be reduced.
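The paper's quadrature rules are not reproduced here; as a baseline, a minimal sketch (ours) of the classical real bandpass sampling criterion they extend, enumerating the valid rate ranges 2*fH/n <= fs <= 2*fL/(n-1):

def bandpass_rate_ranges(f_low, f_high):
    """Classical (real) bandpass sampling: for each integer n up to
    floor(f_high / bandwidth), fs in [2*f_high/n, 2*f_low/(n-1)] is alias-free."""
    bw = f_high - f_low
    ranges = []
    for n in range(1, int(f_high // bw) + 1):
        lo = 2.0 * f_high / n
        hi = 2.0 * f_low / (n - 1) if n > 1 else float("inf")
        if lo <= hi:
            ranges.append((n, lo, hi))
    return ranges

# Example: a 5 MHz-wide band at 67.5-72.5 MHz; the largest n gives the
# minimum rate, approaching twice the bandwidth rather than twice f_high.
for n, lo, hi in bandpass_rate_ranges(67.5e6, 72.5e6):
    print(n, lo / 1e6, hi / 1e6)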
A Sub-μW Reconfigurable Front-End for Invasive Neural Recording That Exploits the Spectral Characteristics of the Wideband Neural Signal This paper presents a sub-μW ac-coupled reconfigurable front-end for invasive wideband neural signal recording. The proposed topology embeds filtering capabilities enabling the selection of different frequency bands inside the neural signal spectrum. Power consumption is optimized by defining specific noise targets for each sub-band. These targets take into account the spectral characteristics of wideband neural signals: local field potentials (LFP) exhibit 1/f^x magnitude scaling while action potentials (AP) show uniform magnitude across frequency. Additionally, noise targets also consider electrode noise and the spectral distribution of noise sources in the circuit. An experimentally verified prototype designed in a standard 180 nm CMOS process draws 815 nW from a 1 V supply. The front-end is able to select among four different frequency bands (modes) up to 5 kHz. The measured input-referred spot-noise at 500 Hz in the LFP mode (1 Hz - 700 Hz) is 55 nV/√Hz while the integrated noise in the AP mode (200 Hz - 5 kHz) is 4.1 μVrms. The proposed front-end achieves sub-μW operation without penalizing other specifications such as input swing, common-mode or power-supply rejection ratios. It reduces the power consumption of neural front-ends with spectral selectivity by 6.1× and, compared with conventional wideband front-ends, it obtains a reduction of 2.5×.
1.2
0.2
0.2
0.2
0.2
0.1
0.05
0
0
0
0
0
0
0
Modeling and analyzing mobile ad hoc networks in Real-Time Maude. Modeling and analyzing mobile ad hoc networks (MANETs) pose non-trivial challenges to formal methods. Time, geometry, communication delays and failures, mobility, and uni- and bidirectional wireless communication can interact in unforeseen ways that are hard to model and analyze by current process calculi and automatic formal methods. As a consequence, current analyses tend to abstract away these physical aspects, so that—although still quite useful in finding various errors—their simplifying assumptions can easily fail to model details of MANET behavior relevant to meet desired requirements. In this work we present a formal framework for the modeling and analysis of MANETs based on Real-Time Maude to address this challenge. Specifically, we show that our framework has good expressive power to model relevant aspects of MANETs, and good compositionality properties, so that a MANET protocol can be easily composed with various models of mobility and with other MANET protocols. We illustrate the use of our framework on two well-known MANET benchmarks: the AODV routing protocol and the leader election protocol of Vasudevan, Kurose, and Towsley. Our formal analysis has uncovered a spurious behavior in the latter protocol that is due to the subtle interplay between communication delays, node movement, and neighbor discovery. This behavior therefore cannot be found by analyses that abstract from node movement and communication delays.
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use dominance frontiers, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ◊W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ◊W. Thus, ◊W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area² product (EDA²P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA²P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provides the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors introduced by node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)-size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε²}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D+√n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(log N), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator; while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via error amplifier) with a moderate-gain high-frequency path (via bandpass filter) at the inputs of PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate two zeros required in a Type III compensator. Constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process with constant Gm/C biasing technique being applied to one of the designs. Measurement result shows that converter output is settled within 7 μs for a load current step of 500 mA. Peak efficiency of 97% is obtained at 360 mW output power, and high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of proposed compensator is reduced by > 75 % in both designs, compared to an equivalent conventional Type III compensator.
Fundamental analysis of a car to car visible light communication system This paper presents a mathematical model for car-to-car (C2C) visible light communications (VLC) that aims to predict the system performance under different communication geometries. A market-weighted headlamp beam pattern model is employed. We consider both the line-of-sight (LOS) and non-line-of-sight (NLOS) links, and outline the relationship between the communication distance, the system bit error rate (BER) performance and the BER distribution on a vertical plane. Results show that by placing a photodetector (PD) at a height of 0.2-0.4 m above road surface in the car, the communications coverage range can be extended up to 20 m at a data rate of 2 Mbps.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM), and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM), and 10.48× (CMOS-based PIM), on average.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
An All-Digital 12 pJ/Pulse IR-UWB Transmitter Synthesized From a Standard Cell Library. This paper presents an all-digital impulse radio ultra-wideband (IR-UWB) transmitter. All functional blocks in the transmitter are implemented with digital standard cells and automatically place-and-routed by design tools. The center frequency and the bandwidth of the UWB pulses are digitally tuned to compensate for variations, or target different applications. This paper also proposes a calibrati...
A low voltage auto-reconfigured power-on-reset/bandgap reference circuit In this paper, a low voltage auto-reconfigured power-on-reset/bandgap reference circuit is proposed. During initial power up, the circuit utilizes a transconductor and a bandgap core to detect the power-up supply voltage level (VTPU) precisely with minimum temperature and process dependence. No precise reference voltage is required. After a power-up signal is issued, the circuit is reconfigured into a bandgap circuit with a precise reference output voltage (Vref) for the use of other circuits. Based on a 0.18 μm CMOS process, a VTPU of 999.5 mV with variations within +0.18% and -0.15% for different process corners and temperatures (-20°C to +120°C) was obtained. After power up, a Vref of 599.4 mV with variations within +0.15% and -0.46% was achieved.
Short-Range Low-Data-Rate FM-UWB Transceivers: Overview, Analysis, and Design. This paper summarizes and compares various circuit configurations of sub-modules and RF front-ends for frequency modulated ultra-wideband (FM-UWB), and analyzes the transceiver design parameters and link margin. A highly robust relaxation oscillator for subcarrier generation, low-power ring oscillators for RF FM, automatic frequency calibration (AFC) for system robustness, and preamplifier-latch based...
A High-Precision Resistor-Less CMOS Compensated Bandgap Reference Based on Successive Voltage-Step Compensation. A curvature-compensated resistor-less bandgap reference (BGR), fabricated in a 0.5-μm CMOS process, is proposed in this paper. The BGR utilizes successive voltage-step compensation to produce a temperature-insensitive voltage reference (VR), including one ΔVGS step for first-order compensation and another for higher-order curvature correction. Moreover, a supply noise bypassing technique...
A Differential Digitally Controlled Crystal Oscillator With a 14-Bit Tuning Resolution and Sine Wave Outputs for Cellular Applications. This paper describes the design topologies and considerations of a differential sinusoidal-output digitally controlled crystal oscillator (DCXO) intended for use in cellular applications. The oscillator has a fine-tuning range of ±44 ppm, approximately 14 bits of resolution, and an average step size of 0.005 ppm. All signals connecting externally to I/O pins are sine waves for reducing noise, inte...
Analysis of timing jitter in CMOS ring oscillators In this paper, the effects of thermal noise in transistors on timing jitter in CMOS ring oscillators composed of source-coupled, differential, resistively loaded delay cells are investigated. The relationship between delay element design parameters and the inherent thermal-noise-induced jitter of the generated waveform is analyzed. These results are compared with simulated results from a Monte-Carlo analysis with good agreement. The analysis shows that timing jitter is inversely proportional to the square root of the total capacitance at the output of each inverter, and inversely proportional to the gate-source bias voltage above threshold of the source-coupled devices in the balanced state. Furthermore, these dependencies imply an inverse relationship between jitter and power consumption for an oscillator with fixed output period. Phase noise and timing jitter performance are predicted to improve at a rate of 10 dB per decade increase in power consumption.
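The scaling claims in this abstract can be stated compactly. The display below merely restates them in symbols (C_tot for the total capacitance at each inverter output, V_GS − V_T for the gate overdrive, P for power at a fixed output period; the notation is ours), and is not a derivation from the paper:

```latex
\sigma_{\Delta t} \;\propto\; \frac{1}{\sqrt{C_{\mathrm{tot}}}\,\bigl(V_{GS}-V_{T}\bigr)},
\qquad
\sigma_{\Delta t} \;\propto\; \frac{1}{\sqrt{P}} \quad \text{(fixed oscillation period)} .
```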
Hybrid Forward and Backward Threshold-Compensated RF-DC Power Converter for RF Energy Harvesting This paper presents a hybrid forward and backward threshold voltage compensated radio-frequency to direct current (RF-to-DC) power conversion circuit for RF energy harvesting applications. The proposed circuit uses standard p-channel metal-oxide semiconductor transistors in all the stages except for the first few stages to allow individual body biasing eliminating the need for triple-well technology in the previously reported forward compensation schemes. Two different RF-DC power conversion circuits, one optimized to provide high power conversion efficiency (PCE) and the other to produce a large output DC voltage harvested from extremely low input power levels, are designed and fabricated in IBM's 0.13 μm complementary metal-oxide-semiconductor technology. The first circuit exhibits a measured maximum PCE of 22.6% at -16.8 dBm (20.9 μW) and produces 1 V across a 1 MΩ load from a remarkably low input power level of -21.6 dBm (6.9 μW) while the latter circuit produces 2.8 V across a 1 MΩ load from a peak-to-peak input voltage of 170 mV achieving a voltage multiplication ratio of 17. Also, design strategies are developed to enhance the output DC voltage and to optimize the PCE of threshold voltage compensated voltage multiplier.
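As a quick sanity check on the figures quoted above (all numbers taken from the abstract): delivering 1 V into a 1 MΩ load dissipates 1 μW, so at the stated −21.6 dBm (6.9 μW) input the end-to-end efficiency is about 14.5%, and 2.8 V from a 170 mV peak-to-peak input is a multiplication ratio of about 16.5, consistent with the reported 17:

```latex
P_{\mathrm{load}} = \frac{V^{2}}{R} = \frac{(1\,\mathrm{V})^{2}}{1\,\mathrm{M\Omega}} = 1\,\mu\mathrm{W},
\qquad
\frac{1\,\mu\mathrm{W}}{6.9\,\mu\mathrm{W}} \approx 14.5\%,
\qquad
\frac{2.8\,\mathrm{V}}{0.17\,\mathrm{V}} \approx 16.5 .
```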
A study of phase noise in CMOS oscillators This paper presents a study of phase noise in two inductorless CMOS oscillators. First-order analysis of a linear oscillatory system leads to a noise shaping function and a new definition of Q. A linear model of CMOS ring oscillators is used to calculate their phase noise, and three phase noise phenomena, namely, additive noise, high-frequency multiplicative noise, and low-frequency multiplicative noise, are identified and formulated. Based on the same concepts, a CMOS relaxation oscillator is also analyzed. Issues and techniques related to simulation of noise in the time domain are described, and two prototypes fabricated in a 0.5-μm CMOS technology are used to investigate the accuracy of the theoretical predictions. Compared with the measured results, the calculated phase noise values of a 2-GHz ring oscillator and a 900-MHz relaxation oscillator at 5 MHz offset have an error of approximately 4 dB. Voltage-controlled oscillators (VCOs) are an integral part of phase-locked loops, clock recovery circuits, and frequency synthesizers. Random fluctuations in the output frequency of VCOs, expressed in terms of jitter and phase noise, have a direct impact on the timing accuracy where phase alignment is required and on the signal-to-noise ratio where frequency translation is performed. In particular, RF oscillators employed in wireless transceivers must meet stringent phase noise requirements, typically mandating the use of passive LC tanks with a high quality factor Q. However, the trend toward large-scale integration and low cost makes it desirable to implement oscillators monolithically. The paucity of literature on noise in such oscillators together with a lack of experimental verification of underlying theories has motivated this work. This paper provides a study of phase noise in two inductorless CMOS VCOs. Following a first-order analysis of a linear oscillatory system and introducing a new definition of Q, we employ a linearized model of ring oscillators to obtain an estimate of their noise behavior. We also describe the limitations of the model, identify three mechanisms leading to phase noise, and use the same concepts to analyze a CMOS relaxation oscillator. In contrast to previous studies where time-domain jitter has been investigated (1), (2), our analysis is performed in the frequency domain to directly determine the phase noise. Experimental results obtained from a 2-GHz ring oscillator and a 900-MHz relaxation oscillator indicate that, despite many simplifying approximations, lack of accurate MOS models for RF operation, and the use of simple noise
Threaded code The concept of “threaded code” is presented as an alternative to machine language code. Hardware and software realizations of it are given. In software it is realized as interpretive code not needing an interpreter. Extensions and optimizations are mentioned.
REDMAN: An optimistic replication middleware for read-only resources in dense MANETs The spread of wireless portable devices is pushing towards service provisioning over dense Mobile Ad hoc NETworks (MANETs), i.e., limited spatial regions, such as shopping malls and airports, where a high number of mobile peers can autonomously cooperate without a statically deployed network infrastructure. The paper proposes the REDMAN middleware to manage, retrieve, and disseminate replicas of data/service components to cooperating nodes in a dense MANET. The guideline is to exploit high node population to enable optimistic lightweight resource replication capable of tolerating node exits/failures. REDMAN adopts original approximated solutions, specifically designed for dense MANET, that have demonstrated good scalability and limited overhead for dense MANET configuration (node identification and manager election), for replica distribution/retrieval, and for lazily consistent replica degree maintenance.
A Low Complexity Reconfigurable Non-uniform Filter Bank for Channelization in Multi-standard Wireless Communication Receivers In a typical multi-standard wireless communication receiver, the channelizer must have the capability of extracting multiple channels (frequency bands) of distinct bandwidths corresponding to different communication standards. The channelizer operates at the highest sampling rate in the digital front end of the receiver, and hence a power-efficient, low-complexity architecture is required for cost-effective implementation of the channelizer. Reconfigurability is another key requirement in the channelizer to support different communication standards. In this paper, we propose a low complexity reconfigurable filter bank (FB) channelizer based on coefficient decimation, interpolation and frequency masking techniques. The proposed FB architecture is capable of extracting channels of distinct (non-uniform) bandwidths from the wideband input signal. A design example shows that the proposed FB offers a multiplier complexity reduction of 83% over the Per-Channel (PC) approach and 60% over the Modulated Perfect Reconstruction FB. The proposed FB, when designed as a uniform FB (subbands of equal bandwidths), offers a complexity reduction of 20% over the Discrete Fourier Transform FB (DFTFB) and 57% over the Goertzel Filter Bank. Furthermore, the proposed FB has the added advantage of dynamic reconfigurability over these FBs. The proposed FB is implemented on a Xilinx Virtex 2v3000ff1152-4 FPGA with 16-bit precision. The PC approach and DFTFB are also implemented on the same FPGA with 14-bit precision. The implementation results show an average slice reduction of 29.14% and power reduction of 46.84% over the PC approach, and 14.39% and 2.67% over the DFTFB.
A high-efficiency and compact-size 65 nm power management module with 1.2 V low-voltage PWM controller for UWB system application
27.9 A 200kS/s 13.5b integrated-fluxgate differential-magnetic-to-digital converter with an oversampling compensation loop for contactless current sensing High voltage applications such as electric motor controllers, solar panel power inverters, electric vehicle battery chargers, uninterrupted and switching mode power supplies benefit from the galvanic isolation of contactless current sensors (CCS) [1]. These include magnetic sensors that sense the magnetic field emanating from a current-carrying conductor. The offset and resolution of Hall-effect sensors are at the μT level [1-3], in contrast to the nT-level accuracy of integrated-fluxgate (IFG) magnetometers [4]. Previously reported sampled-data closed-loop IFG readouts have limited BWs as their sampling frequencies (fs) are limited to be less than or equal to the IFG excitation frequency, fEXC [5-7]. This paper describes a differential closed-loop IFG CCS with fs > fEXC. The differential architecture rejects magnetic stray fields and achieves 750× larger BW than the prior closed-loop IFG readouts [6-7] with 10× better offset than the Hall-effect sensors [1-3].
A 32-Channel Time-Multiplexed Artifact-Aware Neural Recording System This paper presents a low-power, low-noise microsystem for the recording of neural local field potentials or intracranial electroencephalographic signals. It features 32 time-multiplexed channels at the electrode interface and offers the possibility to spatially delta encode data to take advantage of the large correlation of signals captured from nearby channels. The circuit also implements a mixed-signal voltage-triggered auto-ranging algorithm which allows to attenuate large interferers in digital domain while preserving neural information. This effectively increases the system dynamic range and avoids the onset of saturation. A prototype, fabricated in a standard 180 nm CMOS process, has been experimentally verified in-vitro with cellular cultures of primary cortical neurons from mice. The system shows an integrated input-referred noise in the 0.5–200 Hz band of 1.4 μVrms for a spot noise of about 85 nV/√Hz. The system draws 1.5 μW per channel from a 1.2 V supply and obtains 71 dB + 26 dB dynamic range when the artifact-aware auto-ranging mechanism is enabled, without penalising other critical specifications such as crosstalk between channels or common-mode and power supply rejection ratios.
1.054472
0.05
0.05
0.05
0.025
0.006371
0.00125
0.000081
0
0
0
0
0
0
Unifying stabilization and termination in message-passing systems The paper dispels the myth that it is impossible for a message-passing program to be both terminating and stabilizing. We consider a rather general notion of termination: a terminating program eventually stops its execution after the environment ceases to provide input. We identify termination-symmetry to be a necessary condition for a problem to admit a solution with such properties. Our results do confirm that a number of well-known problems (e.g., consensus, leader election) do not allow a terminating and stabilizing solution. On the flip side, they show that other problems such as mutual exclusion and reliable transmission allow such solutions. We present a message-passing solution to the mutual exclusion problem that is both stabilizing and terminating. We also describe an approach of adding termination to a stabilizing program. To illustrate this approach, we add termination to a stabilizing solution for the reliable transmission problem.
A Clustering Scheme For Hierarchical Control In Multi-Hop Wireless Networks In this paper we present a clustering scheme to create a hierarchical control structure for multi-hop wireless networks. A cluster is defined as a subset of vertices whose induced graph is connected. In addition, a cluster is required to obey certain constraints that are useful for management and scalability of the hierarchy. All these constraints cannot be met simultaneously for general graphs, but we show how such a clustering can be obtained for wireless network topologies. Finally, we present an efficient distributed implementation of our clustering algorithm for a set of wireless nodes to create the set of desired clusters.
A Mobility Based Metric for Clustering in Mobile Ad Hoc Networks This paper presents a novel relative mobility metric for mobile ad hoc networks (MANETs). It is based on the ratio of power levels due to successive receptions at each node from its neighbors. We propose a distributed clustering algorithm, MOBIC, based on the use of this mobility metric for selection of clusterheads, and demonstrate that it leads to more stable cluster formation than the "least clusterhead change" version of the well-known Lowest-ID clustering algorithm [3]. We show a reduction of as much as 33% in the rate of clusterhead changes owing to the use of the proposed technique. In a MANET that uses scalable cluster-based services, network performance metrics such as throughput and delay are tightly coupled with the frequency of cluster reorganization. Therefore, we believe that using MOBIC can result in a more stable configuration, and thus yield better performance.
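The ratio-of-powers metric described above has, in the MOBIC formulation, roughly the following shape (the notation is ours): the relative mobility of node Y with respect to neighbor X is the log-ratio of two successive received-power measurements, and, as we recall, a node aggregates these samples over its neighbors (via their variance) so that the least mobile node is preferred as clusterhead:

```latex
M^{\mathrm{rel}}_{Y}(X) \;=\; 10 \log_{10}
\frac{\mathrm{RxPr}^{\,\mathrm{new}}_{X \to Y}}{\mathrm{RxPr}^{\,\mathrm{old}}_{X \to Y}} .
```

A value near zero means X and Y are nearly stationary relative to each other; large positive or negative values indicate approaching or receding neighbors.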
Distributed reset A reset subsystem is designed that can be embedded in an arbitrary distributed system in order to allow the system processes to reset the system when necessary. Our design is layered, and comprises three main components: a leader election, a spanning tree construction, and a diffusing computation. Each of these components is self-stabilizing in the following sense: if the coordination between the up-processes in the system is ever lost (due to failures or repairs of processes and channels), then each component eventually reaches a state where coordination is regained. This capability makes our reset subsystem very robust: it can tolerate fail-stop failures and repairs of processes and channels, even when a reset is in progress
Time-optimal leader election in general networks This note presents a simple time-optimal distributed algorithm for electing a leader in a general network. For several important classes of networks this algorithm is also message-optimal and thus performs better than previous algorithms for the problem.
Leader Election Algorithms For Wireless Ad Hoc Networks We consider the problem of secure leader election and propose two cheat-proof election algorithms: the Secure Extrema Finding Algorithm (SEFA) and the Secure Preference-based Leader Election Algorithm (SPLEA). Both algorithms assume a synchronous distributed system in which the various rounds of election proceed in a lock-step fashion. SEFA assumes that all elector-nodes share a single common evaluation function that returns the same value at any elector-node when applied to a given candidate-node. When elector-nodes can have different preferences for a candidate-node, the scenario becomes more complicated. Our Secure Preference-based Leader Election Algorithm (SPLEA) deals with this case. Here, individual utility functions at each elector-node determine an elector-node's preference for a given candidate-node. We relax the assumption of a synchronous distributed system in our Asynchronous Extrema Finding Algorithm (AEFA) and also allow the topology to change during the election process. In AEFA, nodes can start the process of election at different times, but eventually, after topological changes stop long enough for the algorithm to terminate, all nodes agree on a unique leader. Our algorithm has been proven to be "weakly" self-stabilizing.
Reliable Broadcast in Radio Networks with Locally Bounded Failures This paper studies the reliable broadcast problem in a radio network with locally bounded failures. We present a sufficient condition for achievability of reliable broadcast in a general graph subject to Byzantine/crash-stop failures. We then consider the problem of reliable broadcast in an infinite grid (or finite toroidal) radio network under Byzantine and crash-stop failures. We present bounds on the maximum number of failures that may occur in any given neighborhood without rendering reliable broadcast impossible. For the Byzantine failure model, we describe an algorithm which is optimal for the grid network model, as it tolerates faults up to a previously established upper bound for this model. Our results indicate that it is possible to achieve reliable broadcast if slightly less than one-fourth fraction of nodes in any neighborhood are faulty. We also show that reliable broadcast is achievable with crash-stop failures if slightly less than half the nodes in any given neighborhood may be faulty.
On the Application of Formal Methods for Specifying and Verifying Distributed Protocols In this paper we consider the frameworks of Process Algebra and I/O Automata and we apply both towards the verification of a distributed leader-election protocol. Based on the two experiences, we evaluate the approaches and draw initial conclusions with respect to their relative capabilities, strengths, and usability. To the best of our knowledge, this is the first hands-on evaluation of the two models, and we view it as the cornerstone for a wider investigation of the strengths and weaknesses of the two methodologies in specifying and verifying (distributed) protocols.
A fast and elitist multiobjective genetic algorithm: NSGA-II Multi-objective evolutionary algorithms (MOEAs) that use non-dominated sorting and sharing have been criticized mainly for: (1) their O(MN³) computational complexity (where M is the number of objectives and N is the population size); (2) their non-elitism approach; and (3) the need to specify a sharing parameter. In this paper, we suggest a non-dominated sorting-based MOEA, called NSGA-II (Non-dominated Sorting Genetic Algorithm II), which alleviates all of the above three difficulties. Specifically, a fast non-dominated sorting approach with O(MN²) computational complexity is presented. Also, a selection operator is presented that creates a mating pool by combining the parent and offspring populations and selecting the best N solutions (with respect to fitness and spread). Simulation results on difficult test problems show that NSGA-II is able, for most problems, to find a much better spread of solutions and better convergence near the true Pareto-optimal front compared to the Pareto-archived evolution strategy and the strength-Pareto evolutionary algorithm - two other elitist MOEAs that pay special attention to creating a diverse Pareto-optimal front. Moreover, we modify the definition of dominance in order to solve constrained multi-objective problems efficiently. Simulation results of the constrained NSGA-II on a number of test problems, including a five-objective, seven-constraint nonlinear problem, are compared with another constrained multi-objective optimizer, and the much better performance of NSGA-II is observed.
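The fast non-dominated sorting step that gives NSGA-II its O(MN²) complexity is compact enough to sketch directly; the version below (for minimization, with tuples of objective values as individuals) omits crowding-distance computation and selection, which the full algorithm also needs.

```python
# Fast non-dominated sorting, the O(MN^2) core of NSGA-II.
def dominates(p, q):
    """p dominates q: no worse in every objective, strictly better in one."""
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def fast_non_dominated_sort(pop):
    S = [[] for _ in pop]          # S[i]: indices of solutions dominated by i
    n = [0] * len(pop)             # n[i]: how many solutions dominate i
    fronts = [[]]
    for i, p in enumerate(pop):
        for j, q in enumerate(pop):
            if dominates(p, q):
                S[i].append(j)
            elif dominates(q, p):
                n[i] += 1
        if n[i] == 0:
            fronts[0].append(i)    # non-dominated: first Pareto front
    k = 0
    while fronts[k]:
        nxt = []
        for i in fronts[k]:
            for j in S[i]:
                n[j] -= 1          # peel off the current front
                if n[j] == 0:
                    nxt.append(j)
        k += 1
        fronts.append(nxt)
    return fronts[:-1]             # drop the trailing empty front

# Two-objective minimization; prints index lists per Pareto front: [[0, 1, 2], [3]]
print(fast_non_dominated_sort([(1, 5), (2, 2), (3, 1), (4, 4)]))
```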
Disk Paxos We present an algorithm, called Disk Paxos, for implementing a reliable distributed system with a network of processors and disks. Like the original Paxos algorithm, Disk Paxos maintains consistency in the presence of arbitrary non-Byzantine faults. Progress can be guaranteed as long as a majority of the disks are available, even if all processors but one have failed.
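A heavily simplified sketch of the disk-access pattern Disk Paxos relies on: each processor writes its own ballot-stamped block on every disk and reads all processors' blocks from a majority of disks. Plain Python dicts stand in for disks here, and the real protocol's ballot management, leader election, and recovery phases are omitted entirely.

```python
# Toy sketch of the Disk Paxos access pattern (not the full algorithm).
# A "disk" is a dict mapping processor id -> that processor's block,
# where a block is a dict like {"bal": ballot_number, "val": value}.
def write_round(disks, pid, block):
    acks = 0
    for disk in disks:
        try:
            disk[pid] = block          # write own block on this disk
            acks += 1
        except IOError:
            pass                       # models an unavailable disk
    return acks > len(disks) // 2      # succeeded on a majority?

def read_round(disks, n_procs):
    views, reads = {}, 0
    for disk in disks:
        try:
            for pid in range(n_procs):
                blk = disk.get(pid)
                if blk and (pid not in views or blk["bal"] > views[pid]["bal"]):
                    views[pid] = blk   # keep the highest-ballot block seen
            reads += 1
        except IOError:
            pass
    return views if reads > len(disks) // 2 else None

disks = [dict() for _ in range(3)]
assert write_round(disks, pid=0, block={"bal": 1, "val": "x"})
print(read_round(disks, n_procs=2))
```

The majority intersection between any write round and any later read round is what lets the protocol preserve consistency despite failed processors and disks.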
A 6.5 GHz wideband CMOS low noise amplifier for multi-band use An LNA based on a noise-cancelled common-gate topology spans 0.1 to 6.5 GHz with a gain of 19 dB, an NF of 3 dB, and S11 < -10 dB. It is realized in 0.13-μm CMOS and dissipates 12 mW.
A 52 μW Wake-Up Receiver With -72 dBm Sensitivity Using an Uncertain-IF Architecture A dedicated wake-up receiver may be used in wireless sensor nodes to control duty cycle and reduce network latency. However, its power dissipation must be extremely low to minimize the power consumption of the overall link. This paper describes the design of a 2 GHz receiver using a novel "uncertain-IF" architecture, which combines MEMS-based high-Q filtering and a free-running CMOS ring o...
27.9 A 200kS/s 13.5b integrated-fluxgate differential-magnetic-to-digital converter with an oversampling compensation loop for contactless current sensing High voltage applications such as electric motor controllers, solar panel power inverters, electric vehicle battery chargers, uninterrupted and switching mode power supplies benefit from the galvanic isolation of contactless current sensors (CCS) [1]. These include magnetic sensors that sense the magnetic field emanating from a current-carrying conductor. The offset and resolution of Hall-effect sensors are at the μT level [1-3], in contrast to the nT-level accuracy of integrated-fluxgate (IFG) magnetometers [4]. Previously reported sampled-data closed-loop IFG readouts have limited BWs as their sampling frequencies (fs) are limited to be less than or equal to the IFG excitation frequency, fEXC [5-7]. This paper describes a differential closed-loop IFG CCS with fs > fEXC. The differential architecture rejects magnetic stray fields and achieves 750× larger BW than the prior closed-loop IFG readouts [6-7] with 10× better offset than the Hall-effect sensors [1-3].
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM), and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM), and 10.48× (CMOS-based PIM), on average.
1.077852
0.051975
0.051975
0.045403
0.02175
0.012634
0.000307
0.000085
0.000001
0
0
0
0
0
H-Infinity Synchronization And Robust H-Infinity Synchronization Of Coupled Neural Networks With Non-Identical Nodes For coupled neural networks (CNNs) composed of non-identical nodes, the problems of H-infinity synchronization and robust H-infinity synchronization are solved in this paper. On the one hand, some new H-infinity synchronization criteria for CNNs consisting of non-identical nodes with the same dimensions are acquired by exploiting the Lyapunov functional strategy and some inequality techniques. Meanwhile, considering that external disturbances are likely to produce uncertain parameters in the modeling process of CNNs, we also investigate robust H-infinity synchronization for the considered neural network with parametric uncertainties. Furthermore, several new conditions are given to guarantee pinning adaptive H-infinity synchronization and robust pinning adaptive H-infinity synchronization for the considered neural networks under a suitable pinning adaptive law. On the other hand, H-infinity synchronization and robust H-infinity synchronization analysis and pinning control for CNNs consisting of non-identical nodes of different dimensions are investigated similarly. Finally, two examples are given to demonstrate the effectiveness of the derived H-infinity synchronization and robust H-infinity synchronization conditions.
The Emergence of Intelligent Enterprises: From CPS to CPSS When IEEE Intelligent Systems solicited ideas for a new department, cyber-physical systems (CPS) received overwhelming support. Cyber-physical-social systems (CPSS) is the new name for CPS. CPSS is the enabling platform technology that will lead us to an era of intelligent enterprises and industries. Internet use and cyberspace activities have created an overwhelming demand for the rapid development and application of CPSS. CPSS must be pursued with a multidisciplinary approach involving the physical, social, and cognitive sciences, and AI-based intelligent systems will be key to any successful construction and deployment.
Pinning impulsive directed coupled delayed dynamical network and its applications The main objective of the present paper is to further investigate pinning synchronisation of a complex delayed dynamical network with directionally coupling by a single impulsive controller. By developing the analysis procedure of pinning impulsive stability for undirected coupled dynamical network previously, some simple yet general criteria of pinning impulsive synchronisation for such directed coupled network are derived analytically. It is shown that a single impulsive controller can always pin a given directed coupled network to a desired homogenous solution, including an equilibrium point, a periodic orbit, or a chaotic orbit. Subsequently, the theoretical results are illustrated by a directed small-world complex network which is a cellular neural network (CNN) and a directed scale-free complex network with the well-known Hodgkin-Huxley neuron oscillators. Numerical simulations are finally given to demonstrate the effectiveness of the proposed control methodology.
Finite-Time Cluster Synchronization of Lur'e Networks: A Nonsmooth Approach. This paper is devoted to the finite-time cluster synchronization issue of nonlinearly coupled complex networks which consist of discontinuous Lur'e systems. On the basis of the definition of the Filippov regularization process and the measurable selection theorem, the discontinuously nonlinear function is mapped into a function-valued set, then a measurable function is accordingly selected from the Fi...
Analysis and pinning control for passivity of coupled different dimensional neural networks. In this paper, we discuss the passivity of coupled different dimensional neural networks. On the one hand, several passivity criteria for the coupled neural networks with different dimensional nodes are proposed by making use of some inequality techniques and the Lyapunov functional method. Furthermore, we study the pinning passivity of coupled different dimensional neural networks with fixed and adaptive coupling strength, and obtain some sufficient conditions to ensure the pinning passivity of the considered network by designing proper pinning controllers. On the other hand, the passivity analysis and pinning control problem for coupled different dimensional delayed neural networks are studied similarly. Finally, the effectiveness of the derived results is verified by two numerical examples.
Trajectory Tracking on Uncertain Complex Networks via NN-Based Inverse Optimal Pinning Control. A new approach for trajectory tracking on uncertain complex networks is proposed. To achieve this goal, a neural controller is applied to a small fraction of nodes (pinned ones). Such a controller is composed of an on-line identifier based on a recurrent high-order neural network and an inverse optimal controller to track the desired trajectory; a complete stability analysis is also included. In order to verify the applicability and good performance of the proposed control scheme, a representative example is simulated, which consists of a complex network with each node described by a chaotic Lorenz oscillator.
Recent Advances on Dynamical Behaviors of Coupled Neural Networks With and Without Reaction–Diffusion Terms Recently, the dynamical behaviors of coupled neural networks (CNNs) with and without reaction-diffusion terms have been widely researched due to their successful applications in different fields. This article introduces some important and interesting results on this topic. First, synchronization, passivity, and stability analysis results for various CNNs with and without reaction-diffusion terms are summarized, including the results for impulsive, time-varying, time-invariant, uncertain, fuzzy, and stochastic network models. In addition, some control methods, such as sampled-data control, pinning control, impulsive control, state feedback control, and adaptive control, have been used to realize the desired dynamical behaviors in CNNs with and without reaction-diffusion terms. In this article, these methods are summarized. Finally, some challenging and interesting problems deserving of further investigation are discussed.
Event-triggered distributed control for synchronization of multiple memristive neural networks under cyber-physical attacks. This paper investigates the synchronization of multiple memristive neural networks (MMNNs) under cyber-physical attacks through distributed event-triggered control. In the field of multi-agent dynamics, a memristive neural network (MNN) is considered a kind of switched system because of its state-dependent parameters, which can lead to parameter mismatch during synchronization. This increases the uncertainty of the system and complicates the theoretical analysis. A neural network is also a typical nonlinear system, so the model studied in this paper is a nonlinear system with switching characteristics. In complex environments, MMNNs may suffer mixed attacks; one class, called cyber-physical attacks, may influence both communication links and MNN nodes, causing changes in topology and physical state. To tackle this issue, we construct a novel Lyapunov functional and use properties of M-matrices to derive criteria for the synchronization of MMNNs under cyber-physical attacks. It is worth mentioning that the controllers in this paper are designed to be distributed under event-triggering conditions, and Zeno behavior is excluded. In addition, an algorithm for parameter selection is given to help design the controllers. An example is given at the end of the paper to support our results.
Quick detection of difficult bugs for effective post-silicon validation We present a new technique for systematically creating postsilicon validation tests that quickly detect bugs in processor cores and uncore components (cache controllers, memory controllers, on-chip networks) of multi-core System on Chips (SoCs). Such quick detection is essential because long error detection latency, the time elapsed between the occurrence of an error due to a bug and its manifestation as an observable failure, severely limits the effectiveness of existing post-silicon validation approaches. In addition, we provide a list of realistic bug scenarios abstracted from “difficult” bugs that occurred in commercial multi-core SoCs. Our results for an OpenSPARC T2-like multi-core SoC demonstrate: 1. Error detection latencies of “typical” post-silicon validation tests can be very long, up to billions of clock cycles, especially for bugs in uncore components. 2. Our new technique shortens error detection latencies by several orders of magnitude to only a few hundred cycles for most bug scenarios. 3. Our new technique enables 2-fold increase in bug coverage. An important feature of our technique is its software-only implementation without any hardware modification. Hence, it is readily applicable to existing designs.
On-Chip Interconnection Architecture of the Tile Processor iMesh, the Tile Processor Architecture's on-chip interconnection network, connects the multicore processor's tiles with five 2D mesh networks, each specialized for a different use. Taking advantage of the five networks, the C-based iLib interconnection library efficiently maps program communication across the on-chip interconnect. The Tile Processor's first implementation, the TILE64, contains 64 cores and can execute 192 billion 32-bit operations per second at 1 GHz.
A new class of asynchronous A/D converters based on time quantization This work is a contribution to a drastic change in standard signal processing chains. The main objective is to reduce the power consumption by one or two orders of magnitude. Integrated Smart Devices and Communicating Objects are application domains targeted by this work. In this context, we present a new class of Analog-to-Digital Converters (ADCs), based on an irregular sampling of the analog signal, and an asynchronous design. Because they are not conventional, a complete design methodology is presented. It determines their characteristics given the required effective number of bits and the analog signal properties. It is shown that our approach leads to a significant reduction in terms of hardware complexity and power consumption. A prototype has been designed for speech applications, using the STMicroelectronics 0.18-μm CMOS technology. Electrical simulations prove that the figure of merit is improved by more than one order of magnitude compared to synchronous Nyquist ADCs.
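To illustrate the irregular, amplitude-driven sampling idea behind this converter class (often called level-crossing sampling), here is a small software model: a sample is produced only when the input crosses a quantization level, so the output is a sparse stream of (time, level) events. The LSB size and test signal are arbitrary choices of ours, and a real converter would timestamp crossings with a timer rather than scan a dense time grid.

```python
# Software model of level-crossing (irregular) sampling.
import numpy as np

def level_crossing_sample(t, x, lsb):
    """Emit a (time, quantized amplitude) event at each level crossing."""
    events, last_level = [], round(x[0] / lsb)
    for ti, xi in zip(t, x):
        level = round(xi / lsb)
        if level != last_level:            # input crossed a quantization level
            events.append((ti, level * lsb))
            last_level = level
    return events

t = np.linspace(0, 1e-2, 10000)
x = 0.5 * np.sin(2 * np.pi * 300 * t)      # 300 Hz test tone
print(len(level_crossing_sample(t, x, lsb=0.05)), "events instead of 10000 samples")
```

For slowly varying signals such as speech, most of the time no level is crossed, which is the source of the activity-dependent power savings claimed above.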
Charge redistribution loss consideration in optimal charge pump design The charge redistribution loss of capacitors is reviewed, and then employed in the optimal capacitor assignment of charge pumps. The average output voltage is unambiguously defined, and efficiency due to redistribution loss is discussed. Analyses are confirmed by Hspice simulations on charge pumps designed using a 0.35 μm CMOS process.
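The redistribution loss the paper builds on is the standard result for connecting two capacitors precharged to different voltages: charge is conserved but energy is not, and the lost energy is independent of the switch resistance:

```latex
V_{\mathrm{final}} = \frac{C_1 V_1 + C_2 V_2}{C_1 + C_2},
\qquad
E_{\mathrm{loss}} = \frac{1}{2}\,\frac{C_1 C_2}{C_1 + C_2}\,\bigl(V_1 - V_2\bigr)^{2} .
```

Because the loss grows with the square of the voltage difference, a charge pump that equalizes capacitor voltages in smaller steps redistributes less energy per cycle, which is what makes the capacitor assignment an optimization problem.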
Synthesizing information systems knowledge: A typology of literature reviews. •We proposed a typology of nine review types based on seven core dimensions.•The number of reviews in top-ranked IS journals has increased between 1999 and 2013.•Theoretical and narrative reviews are the most prevalent types in top IS journals.•We found inconsistencies in the labels used by authors to qualify IS reviews.•A majority of IS reviews reported only scholars as their target audience.
A 12-Bit Dynamic Tracking Algorithm-Based SAR ADC With Real-Time QRS Detection A 12-bit successive approximation register (SAR) ADC based on a dynamic tracking algorithm and a real-time QRS-detection algorithm are proposed. The dynamic tracking algorithm features two tracking windows adjacent to the prediction interval. The algorithm tracks the input signal's variation range and automatically adjusts the subrange interval and updates the prediction code. The QRS-complex detection algorithm integrates a synchronous time-sequential ADC and a real-time QRS detector. The chip is fabricated in a standard 0.13 μm CMOS process with a 0.6 V supply. Measurement results show that the proposed ADC exhibits an effective number of bits (ENOB) of 10.72 and a spur-free dynamic range (SFDR) of 79.63 dB at a 10 kHz sample rate given a 41.5 Hz sinusoid input. The DNL and INL are bounded at -0.6/0.62 LSB and -0.67/1.43 LSB. The ADC achieves a FoM of 48 fJ/conversion-step in the best case. Also, the prototype is tested with an ECG signal input and extracts the heartbeat signal successfully.
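One plausible reading of the tracking idea, as a software model: start the successive-approximation search inside a prediction interval around the previous code, and fall back to a full-range search when the sample lands outside it. The window size, fallback rule, and the use of an ideal comparator (modeled by comparing against the true code) are all our assumptions, not the paper's circuit.

```python
# Hedged software model of a dynamic-tracking SAR search.
def tracking_sar(sample_code, pred, half_span=64, full_bits=12):
    """Binary-search for sample_code, seeded by a prediction interval."""
    lo = max(0, pred - half_span)
    hi = min(2 ** full_bits - 1, pred + half_span)
    if not (lo <= sample_code <= hi):      # outside the prediction interval:
        lo, hi = 0, 2 ** full_bits - 1     # fall back to a full-range search
    steps = 0
    while lo < hi:                         # plain SAR-style binary search
        mid = (lo + hi) // 2
        if sample_code > mid:              # stands in for a comparator decision
            lo = mid + 1
        else:
            hi = mid
        steps += 1
    return lo, steps

print(tracking_sar(2050, pred=2048))       # near prediction: few comparisons
print(tracking_sar(3500, pred=2048))       # far from prediction: full search
```

The benefit mirrors the abstract's claim: for slowly varying inputs such as ECG baselines, most conversions resolve within the narrow window and need far fewer comparator cycles than a full 12-bit search.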
1.2
0.2
0.2
0.2
0.2
0.2
0.2
0.1
0
0
0
0
0
0
Delegating computation: interactive proofs for muggles In this work we study interactive proofs for tractable languages. The (honest) prover should be efficient and run in polynomial time, or in other words a "muggle". The verifier should be super-efficient and run in nearly-linear time. These proof systems can be used for delegating computation: a server can run a computation for a client and interactively prove the correctness of the result. The client can verify the result's correctness in nearly-linear time (instead of running the entire computation itself). Previously, related questions were considered in the Holographic Proof setting by Babai, Fortnow, Levin and Szegedy, in the argument setting under computational assumptions by Kilian, and in the random oracle model by Micali. Our focus, however, is on the original interactive proof model where no assumptions are made on the computational power or adaptiveness of dishonest provers. Our main technical theorem gives a public coin interactive proof for any language computable by a log-space uniform boolean circuit with depth d and input length n. The verifier runs in time (n+d) • polylog(n) and space O(log(n)), the communication complexity is d • polylog(n), and the prover runs in time poly(n). In particular, for languages computable by log-space uniform NC (circuits of polylog(n) depth), the prover is efficient, the verifier runs in time n • polylog(n) and space O(log(n)), and the communication complexity is polylog(n). Using this theorem we make progress on several questions: We show how to construct short (polylog size) computationally sound non-interactive certificates of correctness for any log-space uniform NC computation, in the public-key model. The certificates can be verified in quasi-linear time and are for a designated verifier: each certificate is tailored to the verifier's public key. This result uses a recent transformation of Kalai and Raz from public-coin interactive proofs to one-round arguments. The soundness of the certificates is based on the existence of a PIR scheme with polylog communication. Interactive proofs with public-coin, log-space, poly-time verifiers for all of P. This settles an open question regarding the expressive power of proof systems with such verifiers. Zero-knowledge interactive proofs with communication complexity that is quasi-linear in the witness length for any NP language verifiable in NC, based on the existence of one-way functions. Probabilistically checkable arguments (a model due to Kalai and Raz) of size polynomial in the witness length (rather than the instance length) for any NP language verifiable in NC, under computational assumptions.
State Machine Replication for the Masses with BFT-SMART The last fifteen years have seen an impressive amount of work on protocols for Byzantine fault-tolerant (BFT) state machine replication (SMR). However, there is still a need for practical and reliable software libraries implementing this technique. BFT-SMART is an open-source Java-based library implementing robust BFT state machine replication. Some of the key features of this library that distinguish it from similar works (e.g., PBFT and UpRight) are improved reliability, modularity as a first-class property, multicore-awareness, reconfiguration support, and a flexible programming interface. When compared to other SMR libraries, BFT-SMART achieves better performance and is able to withstand a number of real-world faults that previous implementations cannot.
Demystifying Fog Computing: Characterizing Architectures, Applications and Abstractions Internet of Things (IoT) has accelerated the deployment of millions of sensors at the edge of the network, through Smart City infrastructure and lifestyle devices. Cloud computing platforms are often tasked with handling these large volumes and fast streams of data from the edge. Recently, Fog computing has emerged as a concept for low-latency and resource-rich processing of these observation streams, to complement Edge and Cloud computing. In this paper, we review various dimensions of system architecture, application characteristics and platform abstractions that are manifest in this Edge, Fog and Cloud eco-system. We highlight novel capabilities of the Edge and Fog layers, such as physical and application mobility, privacy sensitivity, and a nascent runtime environment. IoT application case studies based on first-hand experiences across diverse domains drive this categorization. We also highlight the gap between the potential and the reality of Fog computing, and identify challenges that need to be overcome for the solution to be sustainable. Taken together, our article can help platform and application developers bridge the gap that remains in making Fog computing viable.
Peer-to-Peer Bidirectional Streaming Using Mobile Edge Computing P2P streaming services, which deliver content between peers without using a delivery server, are popular. Because P2P streaming has no delivery server, it has the advantages of reducing cost and of not concentrating load on any single peer; however, peers with short round-trip times (RTTs) typically connect with each other to deliver content. As a result, the number of hops increases and delay occurs, and there is also a withdrawal-tolerance problem: when a peer stops viewing, other peers may no longer receive the content. In this study, we focus on bidirectional streaming, where these problems are pronounced, and propose a bidirectional streaming scheme that uses edge computing to reduce the number of hops and improve withdrawal tolerance. Furthermore, we verify the usefulness of the proposed system by simulation and show that it reduces the number of hops and improves withdrawal tolerance compared with a conventional P2P distribution system.
EdgeKV: Decentralized, scalable, and consistent storage for the edge Edge computing moves the computation closer to the data and the data closer to the user to overcome the high latency communication of cloud computing. Storage at the edge allows data access with high speeds that enable latency-sensitive applications in areas such as autonomous driving and smart grid. However, several distributed services are typically designed for the cloud and building an efficient edge-enabled storage system is challenging because of the distributed and heterogeneous nature of the edge and its limited resources. In this paper, we propose EdgeKV, a decentralized storage system designed for the network edge. EdgeKV offers fast and reliable storage, utilizing data replication with strong consistency guarantees. With a location-transparent and interface-based design, EdgeKV can scale with a heterogeneous system of edge nodes. We implement a prototype of the EdgeKV modules in Golang and evaluate it in both the edge and cloud settings on the Grid’5000 testbed. We utilize the Yahoo! Cloud Serving Benchmark (YCSB) to analyze the system’s performance under realistic workloads. Our evaluation results show that EdgeKV outperforms the cloud storage setting with both local and global data access with an average write response time and throughput improvements of 26% and 19% respectively under the same settings. Our evaluations also show that EdgeKV can scale with the number of clients, without sacrificing performance. Finally, we discuss the energy efficiency improvement when utilizing edge resources with EdgeKV instead of a centralized cloud.
Edge Computing: Vision and Challenges. The proliferation of Internet of Things (IoT) and the success of rich cloud services have pushed the horizon of a new computing paradigm, edge computing, which calls for processing the data at the edge of the network. Edge computing has the potential to address the concerns of response time requirement, battery life constraint, bandwidth cost saving, as well as data safety and privacy. In this pap...
Adaptive clustering for mobile wireless networks This paper describes a self-organizing, multihop, mobile radio network which relies on a code-division access scheme for multimedia support. In the proposed network architecture, nodes are organized into nonoverlapping clusters. The clusters are independently controlled, and are dynamically reconfigured as the nodes move. This network architecture has three main advantages. First, it provides spatial reuse of the bandwidth due to node clustering. Second, bandwidth can be shared or reserved in a controlled fashion in each cluster. Finally, the cluster algorithm is robust in the face of topological changes caused by node motion, node failure, and node insertion/removal. Simulation shows that this architecture provides an efficient, stable infrastructure for the integration of different types of traffic in a dynamic radio network
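To make the clustering notion concrete, the sketch below shows a generic lowest-ID formation pass of the kind this literature builds on: the lowest undecided node ID in each neighborhood becomes a clusterhead and its neighbors join it, yielding non-overlapping clusters. This is an illustrative baseline under our own simplifications; the paper's actual scheme is distributed and additionally handles code assignment and reconfiguration as nodes move.

```python
# Lowest-ID style cluster formation over an undirected topology graph.
def lowest_id_clusters(adj):
    """adj: dict node -> set of neighbors; returns dict node -> clusterhead."""
    head = {}
    for node in sorted(adj):               # decide in increasing-ID order
        if node in head:
            continue                       # already absorbed into a cluster
        head[node] = node                  # lowest undecided ID: clusterhead
        for nb in adj[node]:
            head.setdefault(nb, node)      # undecided neighbors join it
    return head

topology = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3, 5}, 5: {4}}
print(lowest_id_clusters(topology))        # {1: 1, 2: 1, 3: 1, 4: 4, 5: 4}
```

Each cluster is one hop around its head, which is what enables per-cluster bandwidth control of the kind the paper describes.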
Quick detection of difficult bugs for effective post-silicon validation We present a new technique for systematically creating postsilicon validation tests that quickly detect bugs in processor cores and uncore components (cache controllers, memory controllers, on-chip networks) of multi-core System on Chips (SoCs). Such quick detection is essential because long error detection latency, the time elapsed between the occurrence of an error due to a bug and its manifestation as an observable failure, severely limits the effectiveness of existing post-silicon validation approaches. In addition, we provide a list of realistic bug scenarios abstracted from “difficult” bugs that occurred in commercial multi-core SoCs. Our results for an OpenSPARC T2-like multi-core SoC demonstrate: 1. Error detection latencies of “typical” post-silicon validation tests can be very long, up to billions of clock cycles, especially for bugs in uncore components. 2. Our new technique shortens error detection latencies by several orders of magnitude to only a few hundred cycles for most bug scenarios. 3. Our new technique enables 2-fold increase in bug coverage. An important feature of our technique is its software-only implementation without any hardware modification. Hence, it is readily applicable to existing designs.
Disk Paxos We present an algorithm, called Disk Paxos, for implementing a reliable distributed system with a network of processors and disks. Like the original Paxos algorithm, Disk Paxos maintains consistency in the presence of arbitrary non-Byzantine faults. Progress can be guaranteed as long as a majority of the disks are available, even if all processors but one have failed.
An area-efficient multistage 3.0- to 8.5-GHz CMOS UWB LNA using tunable active inductors An area-efficient multistage 3.0- to 8.5-GHz ultra-wideband low-noise amplifier (LNA) utilizing tunable active inductors (AIs) is presented. The AI includes a negative impedance circuit (NIC) consisting of a pair of cross-coupled NMOS transistors and is tuned to vary the gain and bandwidth (BW) of the amplifier. Fabricated in a 90-nm digital CMOS process, the proposed fully on-chip LNA occupies a core chip area of only 0.022 mm2. The measurement results show a power gain S21 of 16.0 dB, a noise figure of 3.1-4.4 dB, and an input return loss S11 of less than -10.5 dB over the 3-dB BW of 3.0-8.5 GHz. Tuning the AIs allows one to increase the gain above 18.0 dB and to extend the BW over 9.4 GHz. The LNA consumes 16.0 mW from a power supply of 1.2 V.
RockSalt: better, faster, stronger SFI for the x86 Software-based fault isolation (SFI), as used in Google's Native Client (NaCl), relies upon a conceptually simple machine-code analysis to enforce a security policy. But for complicated architectures such as the x86, it is all too easy to get the details of the analysis wrong. We have built a new checker that is smaller, faster, and has a much reduced trusted computing base when compared to Google's original analysis. The key to our approach is automatically generating the bulk of the analysis from a declarative description which we relate to a formal model of a subset of the x86 instruction set architecture. The x86 model, developed in Coq, is of independent interest and should be usable for a wide range of machine-level verification tasks.
An Opportunistic Cognitive MAC Protocol for Coexistence with WLAN In recent decades, the demand for wireless spectrum has increased rapidly with the development of mobile communication services. Recent studies recognize that traditional fixed spectrum assignment does not use spectrum efficiently. Such waste can be remedied with the advent of cognitive radio. Cognitive radio is a new technology that enables secondary spectrum usage by unlicensed users. This paper presents an opportunistic cognitive MAC protocol (OC-MAC) for cognitive radios to access unoccupied spectrum resources opportunistically and coexist with wireless local area networks (WLANs). Through a primary traffic prediction model and a transmission etiquette, OC-MAC avoids causing fatal damage to licensed users. An ns-2 simulation model is then developed to evaluate its performance in scenarios with a coexisting WLAN and cognitive network.
Current-mode adaptively hysteretic control for buck converters with fast transient response and improved output regulation This paper presents a current-mode adaptively hysteretic control (CMAHC) technique to achieve fast transient response for DC-DC buck converters. A complementary full-range current sensor comprising charging-path and discharging-path sensing transistors is proposed to track the inductor current seamlessly. With the proposed current-mode adaptively hysteretic topology, the inductor current is continuously monitored, and the adaptively hysteretic threshold is dynamically adjusted according to feedback information from the output voltage level. Therefore, a fast load-transient response can be achieved. Besides, the output regulation performance is also improved by the proposed dynamic current-scaling circuitry (DCSC). Moreover, the proposed CMAHC topology can be used in a nearly zero-RESR design configuration. The prototype fabricated using a TSMC 0.25-μm CMOS process occupies an area of 1.78 mm2 including all bonding pads. Experimental results show that the output voltage ripple is smaller than 30 mV over a wide loading current range from 0 mA to 500 mA, with maximum power conversion efficiency higher than 90%. The recovery time from light to heavy load (100 to 500 mA) is smaller than 5 μs.
Multi-Channel Neural Recording Implants: A Review. Recent progress in neuroscience research and its achievements, as well as advancements in the fabrication process, have increased the demand for neural interfacing systems. Brain-machine interfaces (BMIs) have been shown to be a promising method for the diagnosis and treatment of neurological disorders and the restoration of sensory and motor function. Neural recording implants, as part of a BMI, are capable of capturing brain signals, and amplifying, digitizing, and transferring them outside of the body with a transmitter. The main challenges in designing such implants are minimizing power consumption and the silicon area. In this paper, multi-channel neural recording implants are surveyed. After presenting various neural-signal features, we investigate the main available neural recording circuit and system architectures. The fundamental blocks of available architectures, such as neural amplifiers, analog-to-digital converters (ADCs), and compression blocks, are explored. We cover the various topologies of neural amplifiers, provide a comparison, and probe their design challenges. To achieve a relatively high SNR at the output of the neural amplifier, noise reduction techniques are discussed. Also, to transfer neural signals outside of the body, they are digitized using data converters, and in most cases data compression is then applied to mitigate power consumption. We present the various dedicated ADC structures, as well as an overview of the main data compression methods.
1.112
0.1
0.1
0.1
0.1
0.020667
0.000001
0
0
0
0
0
0
0
A 3.1-8 GHz CMOS UWB front-end receiver A two-stage down-conversion architecture for a 3.1-8 GHz ultra-wideband receiver front-end is designed, which uses a local oscillator frequency equal to half the input frequency. The down-conversion is performed in two steps based on a half-RF architecture to produce the baseband signal. The proposed technique is implemented in 0.18-μm CMOS technology and achieves a conversion gain ranging from 36.1 to 32.4 dB and a noise figure of 5.4-8.3 dB across the bandwidth.
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use dominance frontiers, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
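To make the dominance-frontier concept concrete, here is a short sketch that computes dominance frontiers from precomputed immediate dominators. It follows the compact formulation later popularized by Cooper, Harvey, and Kennedy rather than this paper's original pseudocode, so treat it as an illustrative companion under that assumption.

```python
# Compute dominance frontiers given immediate dominators (idom).
# A node's frontier holds the join points where its dominance ends,
# i.e. where SSA phi-functions must be placed.

def dominance_frontiers(preds, idom):
    """preds: dict node -> list of predecessors; idom: dict node ->
    immediate dominator (entry maps to itself). Returns node -> set."""
    df = {n: set() for n in preds}
    for n, ps in preds.items():
        if len(ps) >= 2:                    # only join points contribute
            for p in ps:
                runner = p
                while runner != idom[n]:    # walk up the dominator tree
                    df[runner].add(n)
                    runner = idom[runner]
    return df

# Diamond CFG: entry -> a, entry -> b, a -> join, b -> join
preds = {"entry": [], "a": ["entry"], "b": ["entry"], "join": ["a", "b"]}
idom = {"entry": "entry", "a": "entry", "b": "entry", "join": "entry"}
print(dominance_frontiers(preds, idom))
# {'entry': set(), 'a': {'join'}, 'b': {'join'}, 'join': set()}
```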
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ◊W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ◊W. Thus, ◊W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
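A minimal sketch of Chord's core key-to-node mapping: hash node names and keys onto one identifier ring, and assign each key to the first node at or after its identifier (its successor). Finger tables, joins, and failure handling are omitted; the 16-bit ring and SHA-1 truncation are illustrative choices.

```python
# Consistent-hashing core of Chord: keys and nodes share one identifier
# ring; a key belongs to its successor node on the ring.

import bisect
import hashlib

M = 2 ** 16  # small identifier space, for illustration only

def ring_id(name):
    return int(hashlib.sha1(name.encode()).hexdigest(), 16) % M

def successor(node_ids, key):
    """node_ids: sorted list of node identifiers on the ring."""
    i = bisect.bisect_left(node_ids, ring_id(key))
    return node_ids[i % len(node_ids)]       # wrap around the ring

nodes = sorted(ring_id(f"node-{k}") for k in range(8))
print(successor(nodes, "my-data-item"))      # node responsible for the key
```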
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
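A small sketch of the flavor of comparison a single "unknown" null induces: comparing anything with a null yields neither true nor false. This is an illustrative simplification; the paper builds full generalized relational operators (inclusion, union, difference) on top of such an interpretation.

```python
# Three-valued equality under a single "unknown" null marker
# (illustrative; not the paper's full relational machinery).

NULL = object()   # the single null marker

def eq3(a, b):
    """Return True, False, or None (unknown) for the comparison a = b."""
    if a is NULL or b is NULL:
        return None                # comparison with unknown is unknown
    return a == b

print(eq3(1, 1), eq3(1, 2), eq3(1, NULL))   # True False None
```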
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area² product (EDA²P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA²P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
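As a worked instance of the method on one of the paper's example problems, the sketch below runs ADMM on the lasso, minimize 0.5*||Ax - b||^2 + lam*||x||_1: a ridge-like x-update, a soft-thresholding z-update, and a dual u-update. The parameter choices (rho, iteration count) are illustrative.

```python
# ADMM for the lasso: alternate an x-update (regularized least squares),
# a z-update (soft thresholding, the prox of the l1 norm), and a scaled
# dual update u.

import numpy as np

def lasso_admm(A, b, lam, rho=1.0, iters=200):
    n = A.shape[1]
    AtA, Atb = A.T @ A, A.T @ b
    L = np.linalg.cholesky(AtA + rho * np.eye(n))   # factor once, reuse
    x = z = u = np.zeros(n)
    for _ in range(iters):
        # x-update: solve (A'A + rho*I) x = A'b + rho*(z - u)
        x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))
        # z-update: elementwise soft thresholding
        v = x + u
        z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)
        # u-update: running sum of primal residuals
        u = u + x - z
    return z

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20); x_true[:3] = [2.0, -1.5, 1.0]
b = A @ x_true + 0.01 * rng.standard_normal(50)
print(np.round(lasso_admm(A, b, lam=1.0), 2))   # sparse estimate of x_true
```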
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)- size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε2}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D+√n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(logN), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator, while the Type III response is synthesized by adding a high-gain low-frequency path (via error amplifier) with a moderate-gain high-frequency path (via bandpass filter) at the inputs of the PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate the two zeros required in a Type III compensator. A constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process, with the constant Gm/C biasing technique applied to one of the designs. Measurement results show that the converter output settles within 7 μs for a load current step of 500 mA. A peak efficiency of 97% is obtained at 360 mW output power, and high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of the proposed compensator are reduced by >75% in both designs, compared to an equivalent conventional Type III compensator.
Fundamental analysis of a car to car visible light communication system This paper presents a mathematical model for car-to-car (C2C) visible light communications (VLC) that aims to predict the system performance under different communication geometries. A market-weighted headlamp beam pattern model is employed. We consider both the line-of-sight (LOS) and non-line-of-sight (NLOS) links, and outline the relationship between the communication distance, the system bit error rate (BER) performance and the BER distribution on a vertical plane. Results show that by placing a photodetector (PD) at a height of 0.2-0.4 m above road surface in the car, the communications coverage range can be extended up to 20 m at a data rate of 2Mbps.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Three matrix conditions for the reduction of finite automata based on the theory of semi-tensor product of matrices
Deadlock avoidance in flexible manufacturing systems using finite automata A distinguishing feature of a flexible manufacturing system (FMS) is the ability to perform multiple tasks in one machine or workstation (alternative machining) and the ability to process parts according to more than one sequence of operations (alternative sequencing). In this paper, we address the issue of deadlock avoidance in systems having these characteristics. A deadlock-free and maximally permissive control policy that incorporates this flexibility is developed based on finite automata models of part process plans and the FMS. The resulting supervisory controller is used for dynamic evaluation of deadlock avoidance based on the remaining processing requirements of the parts
Observability of hybrid automata by abstraction In this paper, we deal with the observability problem of a class of Hybrid Systems whose output is a timed string on a finite alphabet. We determine under which conditions it is always possible to immediately detect, using the observed output, when the system enters a given discrete state. We illustrate how to construct a Timed Automaton that is an abstraction of the given Hybrid System, and that preserves its observability properties. Moreover, we propose a verification algorithm with polynomial complexity for checking the observability of the Timed Automaton, and a constructive procedure for an observer of the discrete state.
Decentralized observability of discrete event systems with synchronizations. This paper deals with the problem of decentralized observability of discrete event systems. We consider a set of sites each capable of observing a subset of the total event set. When a synchronization occurs, each site transmits its own observation to a coordinator that decides if the word observed belongs to a reference language K or not. Two different properties are studied: uniform q-observability and q-sync observability. It is proved that both properties are decidable for regular languages. Finally, under the assumption that languages K and L are regular, and all the events are observable by at least one site, we propose a procedure to determine the instants at which synchronization should occur to detect the occurrence of any word not in K, as soon as it occurs. The advantage of the proposed approach is that most of the burdensome computations can be moved off-line.
Observability of Finite Labeled Transition Systems. Finite labeled transition systems are nondeterministic and nontotal systems with finitely many inputs, states, and outputs. This paper provides algorithms for verifying the observability of finite labeled transition systems in the so-called multiple-experiment case, the simple-experiment case, and the arbitrary-experiment case, respectively, where these algorithms run in exponential time, exponent...
Force Sensorless Admittance Control for Teleoperation of Uncertain Robot Manipulator Using Neural Networks In this paper, a force sensorless control scheme based on neural networks (NNs) is developed for interaction between robot manipulators and human arms in physical collision. In this scheme, the trajectory is generated by using geometry vector method with Kinect sensor. To comply with the external torque from the environment, this paper presents a sensorless admittance control approach in joint spa...
Observability, Reconstructibility and State Observers of Boolean Control Networks The aim of this paper is to introduce and characterize observability and reconstructibility properties for Boolean networks and Boolean control networks, described according to the algebraic approach proposed by D. Cheng and co-authors in the series of papers [3], [6], [7] and in the recent monograph. A complete characterization of these properties, based both on the Boolean matrices involved in the network description and on the corresponding digraphs, is provided. Finally, the problem of state observer design for reconstructible BNs and BCNs is addressed, and two different solutions are proposed.
Wireless sensor networks: a survey This paper describes the concept of sensor networks which has been made viable by the convergence of micro-electro-mechanical systems technology, wireless communications and digital electronics. First, the sensing tasks and the potential sensor networks applications are explored, and a review of factors influencing the design of sensor networks is provided. Then, the communication architecture for sensor networks is outlined, and the algorithms and protocols developed for each layer in the literature are explored. Open research issues for the realization of sensor networks are also discussed.
Fuzzy tracking control design for nonlinear dynamic systems via T-S fuzzy model This study introduces a fuzzy control design method for nonlinear systems with a guaranteed H∞ model reference tracking performance. First, the Takagi-Sugeno (T-S) fuzzy model is employed to represent a nonlinear system. Next, based on the fuzzy model, a fuzzy observer-based fuzzy controller is developed to make the tracking error as small as possible for all bounded reference inputs. The advantage of the proposed tracking control design is that only a simple fuzzy controller is used, without feedback linearization or a complicated adaptive scheme. With the proposed method, the fuzzy tracking control design problem is parameterized in terms of a linear matrix inequality problem (LMIP). The LMIP can be solved very efficiently using convex optimization techniques. A simulation example is given to illustrate the design procedure and tracking performance of the proposed method.
Exploring an unknown graph It is desired to explore all edges of an unknown directed, strongly connected graph. At each point one has a map of all nodes and edges visited, one can recognize these nodes and edges upon seeing them again, and it is known how many unexplored edges emanate from each node visited. The goal is to minimize the ratio of the total number of edges traversed to the optimum number of traversals had the graph been known. For Eulerian graphs this ratio cannot be better than 2, and 2 is achievable by a simple algorithm. In contrast, the ratio is unbounded when the deficiency of the graph (the number of edges that have to be added to make it Eulerian) is unbounded. The main result is an algorithm that achieves a bounded ratio when the deficiency is bounded; unfortunately the ratio is exponential in the deficiency. It is also shown that, when partial information about the graph is available, minimizing the worst-case ratio is PSPACE-complete.
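The paper's deficiency-bounded algorithm is involved; the sketch below instead shows the natural greedy strategy that already achieves ratio 2 in spirit for the Eulerian case: take an unexplored out-edge whenever one exists, otherwise relocate along known edges to the nearest node with unexplored edges. The out_degree/traverse interface is a hypothetical way to model "seeing" how many unexplored edges emanate from a visited node.

```python
# Greedy online exploration of a directed graph (illustrative heuristic,
# not the paper's deficiency-bounded algorithm).

from collections import deque

def explore(start, out_degree, traverse):
    """out_degree(v): number of out-edges of v (visible on arrival).
    traverse(v, i): follow the i-th out-edge of v; returns its head."""
    known = {}                        # v -> list of discovered edge heads
    pos, walk = start, [start]
    while True:
        known.setdefault(pos, [])
        if len(known[pos]) < out_degree(pos):      # unexplored edge here
            nxt = traverse(pos, len(known[pos]))
            known[pos].append(nxt)
            pos = nxt; walk.append(nxt); continue
        # BFS over the known map to the nearest node with unexplored edges
        parent, q, goal = {pos: None}, deque([pos]), None
        while q:
            v = q.popleft()
            if len(known.get(v, [])) < out_degree(v):
                goal = v; break
            for w in known.get(v, []):
                if w not in parent:
                    parent[w] = v; q.append(w)
        if goal is None:
            return walk                             # everything explored
        path = []
        while goal is not None:                     # rebuild the BFS path
            path.append(goal); goal = parent[goal]
        for v in reversed(path[:-1]):
            walk.append(v)
        pos = walk[-1]

edges = {"a": ["b"], "b": ["c", "a"], "c": ["a"]}
print(explore("a", lambda v: len(edges[v]), lambda v, i: edges[v][i]))
# ['a', 'b', 'c', 'a', 'b', 'a'] -- all four edges traversed
```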
An architecture for survivable coordination in large distributed systems Coordination among processes in a distributed system can be rendered very complex in a large-scale system where messages may be delayed or lost and when processes may participate only transiently or behave arbitrarily, e.g., after suffering a security breach. In this paper, we propose a scalable architecture to support coordination in such extreme conditions. Our architecture consists of a collection of persistent data servers that implement simple shared data abstractions for clients, without trusting the clients or even the servers themselves. We show that, by interacting with these untrusted servers, clients can solve distributed consensus, a powerful and fundamental coordination primitive. Our architecture is very practical and we describe the implementation of its main components in a system called Fleet.
Modeling of software radio aspects by mapping of SDL and CORBA With the evolution of 3rd generation mobile communications standardization, the software radio concept has the potential to offer a pragmatic solution - a software implementation that allows the mobile terminal to adapt dynamically to its radio environment. The mapping of SDL and CORBA mechanisms is introduced, in order to provide a generic platform for the implementation of future mobile services, supporting standardized interfaces and manufacturer platform independent object and service functionality description. For the functional entity diagram model, it is proposed that the functional entities be designed as objects, the functional entities group as 'open' object oriented SDL platforms, and the interfaces between them as CORBA IDLs, communicating via the ORB in a generic implementation and location independent way. The functional entity groups are proposed to be modeled as SDL block types, while the functional entities and sub-entities as SDL process and service types. The objects interact with each other like client or server objects requesting or receiving services from other objects. Every object has a CORBA IDL interface, which allows every component to be distributed in an optimum way by providing a standardized infrastructure, ensuring interoperability, flexibility, reusability, transparency and management capabilities.
Reduction and IR-drop compensation techniques for reliable neuromorphic computing systems The neuromorphic computing system (NCS) is a promising architecture to combat the well-known memory bottleneck of the Von Neumann architecture. The recent breakthrough in memristor devices has made an important step toward realizing a low-power, small-footprint NCS on a chip. However, the currently low manufacturing reliability of nano-devices and the voltage IR-drop along metal wires and memristor arrays severely limit the scale of memristor-crossbar-based NCS and hinder design scalability. In this work, we propose a novel system reduction scheme that significantly lowers the required dimension of the memristor crossbars in NCS while maintaining high computing accuracy. An IR-drop compensation technique is also proposed to overcome the adverse impacts of wire resistance and the sneak-path problem in large memristor crossbar designs. Our simulation results show that the proposed techniques improve computing accuracy by 27.0% and use 38.7% less circuit area compared to the original NCS design.
Power Efficiency Comparison of Event-Driven and Fixed-Rate Signal Conversion and Compression for Biomedical Applications Energy-constrained biomedical recording systems need power-efficient data converters and good signal compression in order to meet the stringent power consumption requirements of many applications. In the literature today, a SAR ADC in combination with digital compression is typically used. Recently, alternative event-driven sampling techniques have been proposed that incorporate compression in the ADC, such as level-crossing A/D conversion. This paper describes the power efficiency analysis of such a level-crossing ADC (LCADC) and the traditional fixed-rate SAR ADC with simple compression. A model for the power consumption of the LCADC is derived, which is then compared to the power consumption of the SAR ADC with zero-order hold (ZOH) compression for multiple biosignals (ECG, EMG, EEG, and EAP). The LCADC is more power efficient than the SAR ADC up to a cross-over point in quantizer resolution (for example 8 bits for an EEG signal). This cross-over point decreases with the ratio of the maximum to average slope in the signal of the application. It also changes with the technology and design techniques used. The LCADC is thus suited for low to medium resolution applications. In addition, the event-driven operation of an LCADC results in less data to transmit in a system application. The event-driven LCADC without a timer and with a single-bit quantizer achieves a reduction in power consumption at the system level of two orders of magnitude, an order of magnitude better than the SAR ADC with ZOH compression. At the system level, the LCADC thus offers a big advantage over the SAR ADC.
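A minimal sketch of the level-crossing sampling idea being compared: rather than sampling at a fixed rate, emit a (time, direction) event whenever the input crosses a level on a uniform grid, so the data rate follows signal activity. The grid spacing plays the role of the quantizer LSB.

```python
# Level-crossing sampling: events are produced only when the signal
# crosses a quantization level, not at a fixed clock rate.

import numpy as np

def level_crossing_events(t, x, lsb):
    """Track the current quantized level of x and report grid crossings."""
    events = []
    level = np.round(x[0] / lsb)               # starting quantized level
    for ti, xi in zip(t[1:], x[1:]):
        while xi >= (level + 1) * lsb:
            level += 1
            events.append((ti, +1))            # upward crossing
        while xi <= (level - 1) * lsb:
            level -= 1
            events.append((ti, -1))            # downward crossing
    return events

t = np.linspace(0, 1, 1000)
x = np.sin(2 * np.pi * 3 * t)
ev = level_crossing_events(t, x, lsb=2 ** -4)
print(len(ev), "events vs", len(t), "uniform samples")
```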
1.24
0.24
0.24
0.24
0.24
0.24
0.048
0
0
0
0
0
0
0
Cognitive radio: an enabling technology for the green radio communications concept In this paper, Cognitive Radio (CR) is proposed as an efficient technology to meet the green communications concept. First, the concept of "green communications" is extended to the radio communications world. The main topics, described for example in the call for papers of the first Greencom09 workshop, include energy-efficient networks, protocols, devices, and energy management. But reducing global CO2 emissions to protect our environment is not the only way to address this green concept in wireless communications: the proper use and optimal sharing of spectrum resources is also a very important topic. In this paper, electromagnetic waves are also treated as a form of pollution affecting other users, and we deal with this problem as well. Sustainable development should also address the human aspects, from both the social and the health points of view. This paper demonstrates that CR may be a very good technology for green radio communication, and describes several examples in detail.
Decision making for cognitive radio equipment: analysis of the first 10 years of exploration. This article offers a general retrospective view of the first 10 years of cognitive radio (CR). More specifically, we explore decision making and learning for CR from an equipment perspective. The article frames the main decision-making problems addressed by the community as general dynamic configuration adaptation (DCA) problems and discusses the solutions suggested in the literature to tackle them. Within this framework, dynamic spectrum management is briefly introduced as a specific instantiation of DCA problems. In our analysis, we identify three dimensions of constraints: those related to the environment, the equipment, and the user. Moreover, we define and use the notion of a priori knowledge to show that the challenges tackled by the radio community during the first 10 years of CR often share the same design space but differ in the a priori knowledge they assume available. Consequently, we suggest "a priori knowledge" as a classification criterion to discriminate between the main techniques proposed in the literature to solve configuration-adaptation decision-making problems. We finally discuss the impact of sensing errors on the decision-making process as a prospective analysis.
Frequency domain interpretation of power ratio metric for cognitive radio systems Software radio (SWR) is an enabling technology for cognitive radio (CR) systems which promises to (de)modulate any signal, at any frequency. An SWR signal is therefore composed of different standards' signals, and each standard's signal is either multicarrier or a multiplex of single carriers. This combination leads to high temporal fluctuations, and thus the SWR signal inherits a high peak to average power ...
The software radio architecture As communications technology continues its rapid transition from analog to digital, more functions of contemporary radio systems are implemented in software, leading toward the software radio. This article provides a tutorial review of software radio architectures and technology, highlighting benefits, pitfalls, and lessons learned. This includes a closer look at the canonical functional partitioning of channel coding into antenna, RF, IF, baseband, and bitstream segments. A more detailed look at the estimation of demand for critical resources is key. This leads to a discussion of affordable hardware configurations, the mapping of functions to component hardware, and related software tools. This article then concludes with a brief treatment of the economics and likely future directions of software radio technology
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
Cache operations by MRU change The performance of set associative caches is analyzed. The method used is to group the cache lines into regions according to their positions in the replacement stacks of a cache, and then to observe how the memory access of a CPU is distributed over these regions. Results from the preserved CPU traces show that the memory accesses are heavily concentrated on the most recently used (MRU) region in the cache. The concept of MRU change is introduced; the idea is to use the event that the CPU accesses a non-MRU line to approximate the time the CPU is changing its working set. The concept is shown to be useful in many aspects of cache design and performance evaluation, such as comparison of various replacement algorithms, improvement of prefetch algorithms, and speedup of cache simulation.
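A small sketch of the measurement underlying the MRU-change idea: replay an access trace through an LRU stack and histogram which stack position each access hits; any access that does not hit the MRU position (position 0) signals a potential working-set change. The toy trace is illustrative.

```python
# Replay a trace through an LRU stack and count hits per stack position
# (MRU = position 0); misses and non-MRU hits approximate "MRU changes".

from collections import Counter

def stack_positions(trace):
    stack, hits = [], Counter()
    for line in trace:
        pos = stack.index(line) if line in stack else None
        if pos is not None:
            stack.pop(pos)
        hits[pos if pos is not None else "miss"] += 1
        stack.insert(0, line)                  # becomes the new MRU line
    return hits

trace = ["A", "A", "B", "A", "C", "C", "B"]
h = stack_positions(trace)
print(h)                        # Counter({'miss': 3, 0: 2, 1: 1, 2: 1})
print("MRU changes:", sum(v for k, v in h.items() if k != 0))   # 5
```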
Achievable rates in cognitive radio channels Cognitive radio promises a low-cost, highly flexible alternative to the classic single-frequency band, single-protocol wireless device. By sensing and adapting to its environment, such a device is able to fill voids in the wireless spectrum and can dramatically increase spectral efficiency. In this paper, the cognitive radio channel is defined as a two-sender, two-receiver interference channel in which sender 2 obtains the encoded message sender 1 plans to transmit. We consider two cases: in the genie-aided cognitive radio channel, sender 2 is noncausally presented the data to be transmitted by sender 1 while in the causal cognitive radio channel, the data is obtained causally. The cognitive radio at sender 2 may then choose to transmit simultaneously over the same channel, as opposed to waiting for an idle channel as is traditional for a cognitive radio. Our main result is the development of an achievable region which combines Gel'fand-Pinsker coding with an achievable region construction for the interference channel. In the additive Gaussian noise case, this resembles dirty-paper coding, a technique used in the computation of the capacity of the Gaussian multiple-input multiple-output (MIMO) broadcast channel. Numerical evaluation of the region in the Gaussian noise case is performed, and compared to an inner bound, the interference channel, and an outer bound, a modified Gaussian MIMO broadcast channel. Results are also extended to the case in which the message is causally obtained.
Gradient-Based Learning Applied to Document Recognition Multilayer neural networks trained with the back-propagation algorithm constitute the best example of a successful gradient based learning technique. Given an appropriate network architecture, gradient-based learning algorithms can be used to synthesize a complex decision surface that can classify high-dimensional patterns, such as handwritten characters, with minimal preprocessing. This paper rev...
Understanding Availability This paper addresses a simple, yet fundamental question in the design of peer-to-peer systems: What does it mean when we say "availability" and how does this understanding impact the engineering of practical systems? We argue that existing measurements and models do not capture the complex time-varying nature of availability in today's peer-to-peer environments. Further, we show that unforeseen methodological shortcomings have dramatically biased previous analyses of this phenomenon. As the basis of our study, we empirically characterize the availability of a large peer-to-peer system over a period of 7 days, analyze the dependence of the underlying availability distributions, measure host turnover in the system, and discuss how these results may affect the design of high-availability peer-to-peer services.
Interprocedural pointer alias analysis We present practical approximation methods for computing and representing interprocedural aliases for a program written in a language that includes pointers, reference parameters, and recursion. We present the following contributions: (1) a framework for interprocedural pointer alias analysis that handles function pointers by constructing the program call graph while alias analysis is being performed; (2) a flow-sensitive interprocedural pointer alias analysis algorithm; (3) a flow-insensitive interprocedural pointer alias analysis algorithm; (4) a flow-insensitive interprocedural pointer alias analysis algorithm that incorporates kill information to improve precision; (5) empirical measurements of the efficiency and precision of the three interprocedural alias analysis algorithms.
Using Field-Repairable Control Logic to Correct Design Errors in Microprocessors Functional correctness is a vital attribute of any hardware design. Unfortunately, due to extremely complex architectures, widespread components, such as microprocessors, are often released with latent bugs. The inability of modern verification tools to handle the fast growth of design complexity exacerbates the problem even further. In this paper, we propose a novel hardware-patching mechanism, called the field-repairable control logic (FRCL), that is designed for in-the-field correction of errors in the design's control logic, the most common type of defect, as our analysis demonstrates. Our solution introduces an additional component in the processor's hardware, a state matcher, that can be programmed to identify erroneous configurations using signals in the critical control state of the processor. Once a flawed configuration is "matched," the processor switches into a degraded mode, a mode of operation which excludes most features of the system and is simple enough to be formally verified, yet still capable of executing the full instruction-set architecture one instruction at a time. Once the program segment exposing the design flaw has been executed in degraded mode, we can switch the processor back to its full-performance mode. In this paper, we analyze a range of approaches to selecting signals comprising the processor's critical control state and evaluate their effectiveness in representing a variety of design errors. We also introduce a new metric (average specificity per signal) that encodes the bug-detection capability and amount of control state of a particular critical signal set. We demonstrate that the FRCL can support the detection and correction of multiple design errors with a performance impact of less than 5% as long as the incidence of the flawed configurations is below 1% of dynamic instructions. In addition, the area impact of our solution is less than 2% for the two microprocessor designs that we investigated in our experiments.
A 0.5 V 1.1 MS/sec 6.3 fJ/Conversion-Step SAR-ADC With Tri-Level Comparator in 40 nm CMOS This paper presents an extremely low-voltage operation and power efficient successive-approximation-register (SAR) analog-to-digital converter (ADC). Tri-level comparator is proposed to relax the speed requirement of the comparator and decrease the resolution of internal Digital-to-Analog Converter (DAC) by 1-bit. The internal charge redistribution DAC employs unit capacitance of 0.5 fF and ADC operates at nearly thermal noise limitation. To deal with the problem of capacitor mismatch, reconfigurable capacitor array and calibration procedure were developed. The prototype ADC fabricated using 40 nm CMOS process achieves 46.8 dB SNDR and 58.2 dB SFDR with 1.1 MS/sec at 0.5 V power supply. The FoM is 6.3-fJ/conversion step and the chip die area is only 160 μm × 70 μm.
Conductance modulation techniques in switched-capacitor DC-DC converter for maximum-efficiency tracking and ripple mitigation in 22nm Tri-gate CMOS Active conduction modulation techniques are demonstrated in a fully integrated multi-ratio switched-capacitor voltage regulator with hysteretic control, implemented in 22nm tri-gate CMOS with high-density MIM capacitors. We present (i) an adaptive switching frequency and switch-size scaling scheme for maximum efficiency tracking across a wide range of voltages and currents, governed by a frequency-based control law that is experimentally validated across multiple dies and temperatures, and (ii) a simple active ripple mitigation technique that modulates the gate drive of select MOSFET switches effectively in all conversion modes. Efficiency boosts of up to 15% are measured under light-load conditions. Load-independent output ripple of <50 mV is achieved, enabling less interleaving. Testchip implementations and measurements demonstrate ease of integration in SoC designs, power efficiency benefits and EMI/RFI improvements.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
1.1
0.1
0.05
0.000365
0
0
0
0
0
0
0
0
0
0
A Digitally Dynamic Power Supply Technique for 16-Channel 12 V-Tolerant Stimulator Realized in a 0.18- μm 1.8-V/3.3-V Low-Voltage CMOS Process. A new digitally dynamic power supply technique for 16-channel 12-V-tolerant stimulator is proposed and realized in a 0.18-μm 1.8-V/3.3-V CMOS process. The proposed stimulator uses four stacked transistors as the pull-down switch and pull-up switch to withstand 4 times the nominal supply voltage (4 × V DD). With the dc input voltage of 3.3 V, the regulated three-stage charge pump, which is capable ...
Compact, Energy-Efficient High-Frequency Switched Capacitor Neural Stimulator With Active Charge Balancing. Safety and energy efficiency are two major concerns for implantable neural stimulators. This paper presents a novel high-frequency, switched capacitor (HFSC) stimulation and active charge balancing scheme, which achieves high energy efficiency and well-controlled stimulation charge in the presence of large electrode impedance variations. Furthermore, the HFSC can be implemented in a compact size w...
An Ultra High-Frequency 8-Channel Neurostimulator Circuit with 68% Peak Power Efficiency. In order to recruit neurons in excitable tissue, constant-current neural stimulators are commonly used. Recently, ultra high-frequency (UHF) stimulation has been proposed and proven to have the same efficacy as constant-current stimulation [1]. This paper presents the design, integrated circuit (IC) implementation and measurement results of a power-efficient multichannel UHF neural stimulator. The core of the neurostimulator is based on our previously proposed architecture of an inductor-based buck-boost DC-DC converter without the external output capacitor [2]. The ultimate goal of this work is to increase the power efficiency of the UHF stimulator for multiple-channel operation, while keeping the number of external components minimal. To this end, a number of novel approaches were employed in the integrated circuit design domain. More specifically, a novel zero-current detection scheme is proposed. It makes it possible to remove the freewheel diode typically used in DC-DC converters to prevent current from flowing back from the load to the inductor. Furthermore, a gate-driver circuit is implemented which allows the use of thin gate-oxide transistors as high-voltage switches. By doing so, the need for a high-voltage supply is eliminated and the stimulator is powered from a 3.5 V input voltage. Both the current detection technique and the gate-driving circuit of the current implementation boost the power efficiency by up to 300% compared to previous UHF stimulator works. A peak power efficiency of 68% is achieved. The circuit is implemented in a 0.18 μm HV process, and the total chip area is 3.65 mm2.
A Trimodal Wireless Implantable Neural Interface System-on-Chip A wireless and battery-less trimodal neural interface system-on-chip (SoC), capable of 16-ch neural recording, 8-ch electrical stimulation, and 16-ch optical stimulation, all integrated on a 5 × 3 mm2 chip fabricated in 0.35-μm standard CMOS process. The trimodal SoC is designed to be inductively powered and communicated. The downlink data telemetry utilizes on-off keying pulse-position modulation (OOK-PPM) of the power carrier to deliver configuration and control commands at 50 kbps. The analog front-end (AFE) provides adjustable mid-band gain of 55-70 dB, low/high cut-off frequencies of 1-100 Hz/10 kHz, and input-referred noise of 3.46 μVrms within the 1 Hz-50 kHz band. AFE outputs of every two-channel are digitized by a 50 kS/s 10-bit SAR-ADC, and multiplexed together to form a 6.78 Mbps data stream to be sent out by OOK modulating a 434 MHz RF carrier through a power amplifier (PA) and 6 cm monopole antenna, which form the uplink data telemetry. Optical stimulation has a switched-capacitor based stimulation (SCS) architecture, which can sequentially charge four storage capacitor banks up to 4 V and discharge them in selected μLEDs at instantaneous current levels of up to 24.8 mA on demand. Electrical stimulation is supported by four independently driven stimulating sites at 5-bit controllable current levels in ±(25-775) μA range, while active/passive charge balancing circuits ensure safety. In vivo testing was conducted on four anesthetized rats to verify the functionality of the trimodal SoC.
A fully integrated low-power BPSK demodulator for implantable medical devices During the past decades, research has progressed on biomedical implantable electronic devices that require power and data communication through wireless inductive links. In this paper, we present a fully integrated binary phase-shift keying (BPSK) demodulator, based on a hard-limited COSTAS loop topology, dedicated to such implantable medical devices. The experimental results of the proposed demodulator show a data transmission rate of 1.12 Mbps, power consumption of less than 0.7 mW under a supply voltage of 1.8 V, and a silicon area of 0.2 mm2 in the Taiwan Semiconductor Manufacturing Company (TSMC) 0.18-μm CMOS technology. This data rate satisfies the requirements of applications, such as cortical stimulation, that demand high forward data-transfer rates. Moreover, employing BPSK demodulation along with a passive modulation method allows full-duplex data communication between an external controller and the implantable device, which may improve the controllability and observability of the overall implanted system.
A Minimally Invasive 64-Channel Wireless μECoG Implant Emerging applications in brain-machine interface systems require high-resolution, chronic multisite cortical recordings, which cannot be obtained with existing technologies due to high power consumption, high invasiveness, or inability to transmit data wirelessly. In this paper, we describe a microsystem based on electrocorticography (ECoG) that overcomes these difficulties, enabling chronic recording and wireless transmission of neural signals from the surface of the cerebral cortex. The device is comprised of a highly flexible, high-density, polymer-based 64-channel electrode array and a flexible antenna, bonded to 2.4 mm × 2.4 mm CMOS integrated circuit (IC) that performs 64-channel acquisition, wireless power and data transmission. The IC digitizes the signal from each electrode at 1 kS/s with 1.2 μV input referred noise, and transmits the serialized data using a 1 Mb/s backscattering modulator. A dual-mode power-receiving rectifier reduces data-dependent supply ripple, enabling the integration of small decoupling capacitors on chip and eliminating the need for external components. Design techniques in the wireless and baseband circuits result in over 16× reduction in die area with a simultaneous 3× improvement in power efficiency over the state of the art. The IC consumes 225 μW and can be powered by an external reader transmitting 12 mW at 300 MHz, which is over 3× lower than IEEE and FCC regulations.
A Bayesian Method for the Induction of Probabilistic Networks from Data This paper presents a Bayesian method for constructing probabilistic networks from databases. In particular, we focus on constructing Bayesian belief networks. Potential applications include computer-assisted hypothesis testing, automated scientific discovery, and automated construction of probabilistic expert systems. We extend the basic method to handle missing data and hidden (latent) variables. We show how to perform probabilistic inference by averaging over the inferences of multiple belief networks. Results are presented of a preliminary evaluation of an algorithm for constructing a belief network from a database of cases. Finally, we relate the methods in this paper to previous work, and we discuss open problems.
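As a sketch of the Bayesian scoring at the heart of such construction algorithms, the code below computes the log marginal likelihood (Cooper-Herskovits style, with uniform Dirichlet priors) of one node's conditional distribution given a candidate parent set; the search over parent sets, missing data, and hidden variables are all omitted. The toy dataset and variable arities are illustrative.

```python
# Log marginal likelihood of one node given a parent set, with
# Dirichlet(1,...,1) priors: sum over parent configurations of
# lgamma(r) - lgamma(r + N_ij) + sum_k lgamma(1 + N_ijk).

from collections import Counter
from math import lgamma

def log_score(data, child, parents, arity):
    """data: list of dicts var -> value; arity: var -> number of states."""
    r = arity[child]
    n_ij = Counter()                  # counts per parent configuration
    n_ijk = Counter()                 # counts per (config, child value)
    for row in data:
        cfg = tuple(row[p] for p in parents)
        n_ij[cfg] += 1
        n_ijk[(cfg, row[child])] += 1
    score = 0.0
    for cfg, nij in n_ij.items():
        score += lgamma(r) - lgamma(r + nij)
        for k in range(r):
            score += lgamma(1 + n_ijk[(cfg, k)]) - lgamma(1)  # lgamma(1)=0
    return score

data = [{"rain": 1, "wet": 1}, {"rain": 0, "wet": 0},
        {"rain": 1, "wet": 1}, {"rain": 0, "wet": 1}]
arity = {"rain": 2, "wet": 2}
print(log_score(data, "wet", ("rain",), arity))   # higher = better fit
print(log_score(data, "wet", (), arity))          # vs. no parents
```

Comparing the two scores is exactly the kind of decision a greedy structure search makes when deciding whether to add "rain" as a parent of "wet".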
Broadband MIMO-OFDM Wireless Communications Orthogonal frequency division multiplexing (OFDM) is a popular method for high data rate wireless transmission. OFDM may be combined with antenna arrays at the transmitter and receiver to increase the diversity gain and/or to enhance the system capacity on time-varying and frequency-selective channels, resulting in a multiple-input multiple-output (MIMO) configuration. The paper explores various p...
Model predictive control: theory and practice—a survey We refer to Model Predictive Control (MPC) as that family of controllers in which there is a direct use of an explicit and separately identifiable model. Control design methods based on the MPC concept have found wide acceptance in industrial applications and have been studied by academia. The reason for such popularity is the ability of MPC designs to yield high performance control systems capable of operating without expert intervention for long periods of time. In this paper the issues of importance that any control system should address are stated. MPC techniques are then reviewed in the light of these issues in order to point out their advantages in design and implementation. A number of design techniques emanating from MPC, namely Dynamic Matrix Control, Model Algorithmic Control, Inferential Control and Internal Model Control, are put in perspective with respect to each other and the relation to more traditional methods like Linear Quadratic Control is examined. The flexible constraint handling capabilities of MPC are shown to be a significant advantage in the context of the overall operating objectives of the process industries and the 1-, 2-, and ∞-norm formulations of the performance objective are discussed. The application of MPC to non-linear systems is examined and it is shown that its main attractions carry over. Finally, it is explained that though MPC is not inherently more or less robust than classical feedback, it can be adjusted more easily for robustness.
Type-2 Fuzzy Sets and Systems: An Overview [corrected reprint] As originally published in the February 2007 issue of IEEE Computational Intelligence Magazine, the above titled paper (ibid., vol. 2, no. 1, pp. 20-29, Feb 07) contained errors in mathematics that were introduced by the publisher. The corrected version is reprinted in its entirety.
Linear Amplification with Nonlinear Components A technique for producing bandpass linear amplification with nonlinear components (LINC) is described. The bandpass signal first is separated into two constant envelope component signals. All of the amplitude and phase information of the original bandpass signal is contained in phase modulation on the component signals. These constant envelope signals can be amplified or translated in frequency by amplifiers or mixers which have nonlinear input-output amplitude transfer characteristics. Passive linear combining of the amplified and/or translated component signals produces an amplified and/or translated replica of the original signal.
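The decomposition has a compact worked form: a complex-baseband sample s = A*exp(j*phi) with A <= A_max splits into s1,2 = (A_max/2)*exp(j*(phi +/- theta)) with theta = arccos(A/A_max), so s1 + s2 = s while both components keep the constant envelope A_max/2. A minimal numpy sketch:

```python
# LINC split: all amplitude information moves into the phase offset
# theta; each component has a constant envelope suitable for nonlinear
# amplification.

import numpy as np

def linc_split(s, a_max):
    a = np.abs(s)
    phi = np.angle(s)
    theta = np.arccos(np.clip(a / a_max, 0.0, 1.0))
    s1 = 0.5 * a_max * np.exp(1j * (phi + theta))
    s2 = 0.5 * a_max * np.exp(1j * (phi - theta))
    return s1, s2

rng = np.random.default_rng(1)
s = rng.standard_normal(8) + 1j * rng.standard_normal(8)
s /= np.max(np.abs(s))                   # normalize so A_max = 1 suffices
s1, s2 = linc_split(s, a_max=1.0)
assert np.allclose(s1 + s2, s)           # passive combining restores s
assert np.allclose(np.abs(s1), 0.5)      # constant envelope on each path
```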
Recurrent-Fuzzy-Neural-Network-Controlled Linear Induction Motor Servo Drive Using Genetic Algorithms A recurrent fuzzy neural network (RFNN) controller based on real-time genetic algorithms (GAs) is developed for a linear induction motor (LIM) servo drive in this paper. First, the dynamic model of an indirect field-oriented LIM servo drive is derived. Then, an online training RFNN with a backpropagation algorithm is introduced as the tracking controller. Moreover, to guarantee the global convergence of tracking error, a real-time GA is developed to search the optimal learning rates of the RFNN online. The GA-based RFNN control system is proposed to control the mover of the LIM for periodic motion. The theoretical analyses for the proposed GA-based RFNN controller are described in detail. Finally, simulated and experimental results show that the proposed controller provides high-performance dynamic characteristics and is robust with regard to plant parameter variations and external load disturbance
Optimum insertion/deletion point selection for fractional sample rate conversion In this paper, an optimum insertion/deletion point selection algorithm for fractional sample rate conversion (SRC) is proposed. The direct insertion/deletion technique achieves low complexity and low power consumption as compared to the other fractional SRC methods. Using a multiple set insertion/deletion technique is efficient for reduction of distortion caused by the insertion/deletion step. When the conversion factor is (N ± Δ)/N, the number of possible patterns of insertion/deletion points and the number of combinations for multiple set inserters/deleters grow as Δ increases. The proposed algorithm minimizes the distortion due to SRC by selecting the patterns and the combinations.
Multi-Channel Neural Recording Implants: A Review. The recently growing progress in neuroscience research and relevant achievements, as well as advancements in the fabrication process, have increased the demand for neural interfacing systems. Brain-machine interfaces (BMIs) have been revealed to be a promising method for the diagnosis and treatment of neurological disorders and the restoration of sensory and motor function. Neural recording implants, as a part of BMI, are capable of capturing brain signals, and amplifying, digitizing, and transferring them outside of the body with a transmitter. The main challenges of designing such implants are minimizing power consumption and the silicon area. In this paper, multi-channel neural recording implants are surveyed. After presenting various neural-signal features, we investigate the main available neural recording circuit and system architectures. The fundamental blocks of available architectures, such as neural amplifiers, analog-to-digital converters (ADCs), and compression blocks, are explored. We cover the various topologies of neural amplifiers, provide a comparison, and probe their design challenges. To achieve a relatively high SNR at the output of the neural amplifier, noise reduction techniques are discussed. Also, to transfer neural signals outside of the body, they are digitized using data converters; in most cases, data compression is then applied to reduce power consumption. We present the various dedicated ADC structures, as well as an overview of the main data compression methods.
1.24
0.24
0.24
0.06
0.016
0.003529
0
0
0
0
0
0
0
0
Observer-Based Event-Triggered Adaptive Fuzzy Control for Leader-Following Consensus of Nonlinear Strict-Feedback Systems In this article, the leader-following consensus problem via the event-triggered control technique is studied for nonlinear strict-feedback systems with unmeasurable states. The follower's nonlinear dynamics are approximated using fuzzy-logic systems, and the fuzzy weights are updated in a nonperiodic manner. By introducing a fuzzy state observer to reconstruct the system states, an observer-based event-triggered adaptive fuzzy control law and a novel event-triggered condition are designed simultaneously. In addition, a nonzero positive lower bound on the interevent intervals is established to avoid Zeno behavior. It is proved via an extension of the Lyapunov approach that ultimately bounded control is achieved for the leader-following consensus of the considered multiagent systems. One remarkable advantage of the proposed control protocol is that the control law and fuzzy weights are updated only when the event-triggered condition is violated, which can greatly reduce data transmission and the use of communication resources. Simulation results are provided to show the effectiveness of the proposed control strategy and the theoretical analysis.
Perception-Based Data Reduction and Transmission of Haptic Data in Telepresence and Teleaction Systems We present a novel approach for the transmission of haptic data in telepresence and teleaction systems. The goal of this work is to reduce the packet rate between an operator and a teleoperator without impairing the immersiveness of the system. Our approach exploits the properties of human haptic perception and is, more specifically, based on the concept of just noticeable differences. In our scheme, updates of the haptic amplitude values are signaled across the network only if the change of a haptic stimulus is detectable by the human operator. We investigate haptic data communication for a 1 degree-of-freedom (DoF) and a 3 DoF teleaction system. Our experimental results show that the presented approach is able to reduce the packet rate between the operator and teleoperator by up to 90% of the original rate without affecting the performance of the system.
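The deadband idea above reduces to a few lines: transmit a haptic sample only when it deviates from the last transmitted value by more than a just-noticeable difference. The sketch below assumes a 10% relative threshold and scalar samples purely for illustration; the paper's actual perceptual parameters and multi-DoF handling differ.

```python
# Minimal perceptual-deadband transmission sketch (JND threshold is illustrative).
def deadband_filter(samples, jnd=0.10):
    """Yield only samples whose relative change exceeds the deadband."""
    last_sent = None
    for i, x in enumerate(samples):
        if last_sent is None or abs(x - last_sent) > jnd * abs(last_sent):
            last_sent = x
            yield i, x  # transmit; the receiver holds this value in between

forces = [1.00, 1.02, 1.05, 1.13, 1.14, 1.30, 1.31]
sent = list(deadband_filter(forces))
# Only 1.00, 1.13 and 1.30 cross the 10% threshold, so 4 of 7 packets are saved.
```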
Design of a Pressure Control System With Dead Band and Time Delay This paper investigates the control of pressure in a hydraulic circuit containing a dead band and a time-varying delay. The dead band is modeled as a linear term plus a perturbation. A sliding mode controller is designed. Stability conditions are established by making use of Lyapunov-Krasovskii functionals, non-perfect time delay estimation is studied, and a condition for the effect of dead-zone uncertainties on stability is derived. The effect of different LMI formulations on conservativeness is also studied. The control law is tested in practice.
Consensus in switching networks with sectorial nonlinear couplings: Absolute stability approach Consensus algorithms for multi-agent networks with high-order agent dynamics, time-varying topology, and uncertain symmetric nonlinear couplings are considered. Convergence conditions for these algorithms are obtained by means of the Kalman-Yakubovich-Popov lemma and absolute stability techniques. The conditions are similar in spirit and extend the celebrated circle criterion for the stability of Lurie systems.
A Distributed Dynamic Event-Triggered Control Approach to Consensus of Linear Multiagent Systems With Directed Networks. In this paper, we study the consensus problem for a class of linear multiagent systems, where the communication networks are directed. First, a dynamic event-triggering mechanism is introduced, including some existing static event-triggering mechanisms as its special cases. Second, based on the dynamic event-triggering mechanism, a distributed control protocol is developed, which ensures that all agents can reach consensus with an exponential convergence rate. Third, it is shown that, with the dynamic event-triggering mechanism, the minimum interevent time between any two consecutive triggering instants can be prolonged and no agent exhibits Zeno behavior. Finally, an algorithm is provided to avoid continuous communication when the dynamic event-triggering mechanism is implemented. The effectiveness of the results is confirmed through a numerical example.
On QUAD, Lipschitz, and Contracting Vector Fields for Consensus and Synchronization of Networks. In this paper, a relationship is discussed between three common assumptions made in the literature to prove local or global asymptotic stability of the synchronization manifold in networks of coupled nonlinear dynamical systems. In such networks, each node, when uncoupled, is described by a nonlinear ordinary differential equation of the form ẋ = f (x,t) . In this paper, we establish links between...
The part-time parliament Recent archaeological discoveries on the island of Paxos reveal that the parliament functioned despite the peripatetic propensity of its part-time legislators. The legislators maintained consistent copies of the parliamentary record, despite their frequent forays from the chamber and the forgetfulness of their messengers. The Paxon parliament's protocol provides a new way of implementing the state machine approach to the design of distributed systems.
Design Techniques for Fully Integrated Switched-Capacitor DC-DC Converters. This paper describes design techniques to maximize the efficiency and power density of fully integrated switched-capacitor (SC) DC-DC converters. Circuit design methods are proposed to enable simplified gate drivers while supporting multiple topologies (and hence output voltages). These methods are verified by a proof-of-concept converter prototype implemented in 0.374 mm2 of a 32 nm SOI process. ...
Distributed reset A reset subsystem is designed that can be embedded in an arbitrary distributed system in order to allow the system processes to reset the system when necessary. Our design is layered, and comprises three main components: a leader election, a spanning tree construction, and a diffusing computation. Each of these components is self-stabilizing in the following sense: if the coordination between the up-processes in the system is ever lost (due to failures or repairs of processes and channels), then each component eventually reaches a state where coordination is regained. This capability makes our reset subsystem very robust: it can tolerate fail-stop failures and repairs of processes and channels, even when a reset is in progress
Winnowing: local algorithms for document fingerprinting Digital content is for copying: quotation, revision, plagiarism, and file sharing all create copies. Document fingerprinting is concerned with accurately identifying copying, including small partial copies, within large sets of documents. We introduce the class of local document fingerprinting algorithms, which seems to capture an essential property of any fingerprinting technique guaranteed to detect copies. We prove a novel lower bound on the performance of any local algorithm. We also develop winnowing, an efficient local fingerprinting algorithm, and show that winnowing's performance is within 33% of the lower bound. Finally, we also give experimental results on Web data, and report experience with MOSS, a widely-used plagiarism detection service.
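The winnowing selection rule itself is short enough to sketch: hash all k-grams, then keep one minimal hash per window of w consecutive hashes (rightmost minimum on ties). This guarantees that any match of length at least k + w - 1 shares a fingerprint. The crc32 hash and the parameter values below are illustrative choices, not the paper's.

```python
# Minimal winnowing fingerprint-selection sketch (hash and parameters illustrative).
import zlib

def kgram_hashes(text, k=5):
    """Hash every k-gram of the text."""
    return [zlib.crc32(text[i:i + k].encode()) & 0xFFFF
            for i in range(len(text) - k + 1)]

def winnow(hashes, w=4):
    """Keep the rightmost minimal hash in every window of w consecutive hashes."""
    fingerprints = set()
    for i in range(len(hashes) - w + 1):
        window = hashes[i:i + w]
        m = min(window)
        j = w - 1 - window[::-1].index(m)  # rightmost occurrence of the minimum
        fingerprints.add((i + j, m))       # (position, hash) pair
    return sorted(fingerprints)

doc = "the quick brown fox jumps over the lazy dog"
print(winnow(kgram_hashes(doc)))
```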
Yet another MicroArchitectural Attack: exploiting I-Cache MicroArchitectural Attacks (MA), which can be considered as a special form of Side-Channel Analysis, exploit microarchitectural functionalities of processor implementations and can compromise the security of computational environments even in the presence of sophisticated protection mechanisms like virtualization and sandboxing. This newly evolving research area has attracted significant interest due to the broad application range and the potentials of these attacks. Cache Analysis and Branch Prediction Analysis were the only types of MA that had been known publicly. In this paper, we introduce Instruction Cache (I-Cache) as yet another source of MA and present our experimental results which clearly prove the practicality and danger of I-Cache Attacks.
Fully Integrated CMOS Power Amplifier With Efficiency Enhancement at Power Back-Off This paper presents a new approach for power amplifier design using deep submicron CMOS technologies. A transformer based voltage combiner is proposed to combine power generated from several low-voltage CMOS amplifiers. Unlike other voltage combining transformers, the architecture presented in this paper provides greater flexibility to access and control the individual amplifiers in a voltage comb...
Understanding the regenerative comparator circuit The regenerative comparator circuit which lies at the heart of A/D conversion, slicer circuits, and memory sensing, is unstable, time-varying, nonlinear, and with multiple equilibria. That does not mean, as this paper shows, that it cannot be understood with simple equivalent circuits that reveal its dynamics completely, and enable it to be designed to specifications on static and dynamic offset and noise. The analysis is applied to the StrongArm latch.
A Hybrid 1st/2nd-Order VCO-Based CTDSM With Rail-to-Rail Artifact Tolerance for Bidirectional Neural Interface Bidirectional brain-machine interfaces enable simultaneous brain activity monitoring and neural modulation. However, stimulation artifacts can saturate the instrumentation front-end while concurrent on-site recording is needed. This brief presents a voltage-controlled-oscillator (VCO) based continuous-time ΔΣ modulator (CTDSM) with rail-to-rail input range and fast artifact tracking. A hybrid 1st/2nd-order loop is designed to achieve high dynamic range (DR) and large input range. Stimulation artifacts are detected by a phase counter and compensated by the 1st-order loop. The residue signal is digitized by the 2nd-order loop for high precision. Redundancy between the two loops is implemented as feedback capacitor elements with a non-binary ratio to guarantee feedback stability and linearity. Fabricated in a 55-nm CMOS process, the prototype achieves 65.7 dB SNDR across a 10 kHz bandwidth with a full scale of 200 mVpp, and a ±1.2 V input range is achieved to suppress artifacts. A saline-based experiment with simultaneous stimulation and recording demonstrates that the implemented system can track and tolerate rail-to-rail stimulation artifacts within 30 μs while small neural signals are continuously monitored.
1.2
0.2
0.2
0.2
0.2
0.1
0
0
0
0
0
0
0
0
Flash-Cosmos: In-Flash Bulk Bitwise Operations Using Inherent Computation Capability of NAND Flash Memory Bulk bitwise operations, i.e., bitwise operations on large bit vectors, are prevalent in a wide range of important application domains, including databases, graph processing, genome analysis, cryptography, and hyper-dimensional computing. In conventional systems, the performance and energy efficiency of bulk bitwise operations are bottlenecked by data movement between the compute units (e.g., CPUs and GPUs) and the memory hierarchy. In-flash processing (i.e., processing data inside NAND flash chips) has a high potential to accelerate bulk bitwise operations by fundamentally reducing data movement through the entire memory hierarchy, especially when the processed data does not fit into main memory. We identify two key limitations of the state-of-the-art in-flash processing technique for bulk bitwise operations: (i) it falls short of maximally exploiting the bit-level parallelism of bulk bitwise operations that could be enabled by leveraging the unique cell-array architecture and operating principles of NAND flash memory; (ii) it is unreliable because it is not designed to take into account the highly error-prone nature of NAND flash memory. We propose Flash-Cosmos (Flash Computation with One-Shot Multi-Operand Sensing), a new in-flash processing technique that significantly increases the performance and energy efficiency of bulk bitwise operations while providing high reliability. Flash-Cosmos introduces two key mechanisms that can be easily supported in modern NAND flash chips: (i) Multi-Wordline Sensing (MWS), which enables bulk bitwise operations on a large number of operands (tens of operands) with a single sensing operation, and (ii) Enhanced SLC-mode Programming (ESP), which enables reliable computation inside NAND flash memory. We demonstrate the feasibility of performing bulk bitwise operations with high reliability in Flash-Cosmos by testing 160 real 3D NAND flash chips. Our evaluation shows that Flash-Cosmos improves average performance and energy efficiency by 3.5×/32× and 3.3×/95×, respectively, over the state-of-the-art in-flash/outside-storage processing techniques across three real-world applications.
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use dominance frontiers, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
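Dominance frontiers admit a compact computation once immediate dominators are known. The sketch below uses the well-known two-finger walk from each join point's predecessors up the dominator tree; this is the later Cooper-Harvey-Kennedy formulation, shown here for illustration rather than the paper's own pseudocode, and the diamond CFG is a toy example.

```python
# Minimal dominance-frontier sketch (Cooper-Harvey-Kennedy style, illustrative).
def dominance_frontiers(preds, idom):
    """preds: node -> list of predecessors; idom: node -> immediate dominator."""
    df = {n: set() for n in preds}
    for b, ps in preds.items():
        if len(ps) < 2:
            continue  # only join points contribute to frontiers
        for p in ps:
            runner = p
            # Walk up the dominator tree until reaching b's immediate dominator.
            while runner != idom[b]:
                df[runner].add(b)
                runner = idom[runner]
    return df

# Diamond CFG: entry -> a, entry -> b, a -> join, b -> join.
preds = {"entry": [], "a": ["entry"], "b": ["entry"], "join": ["a", "b"]}
idom = {"a": "entry", "b": "entry", "join": "entry"}
print(dominance_frontiers(preds, idom))
# DF(a) = DF(b) = {join}: SSA phi-functions for values defined in a or b go at 'join'.
```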
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ◊W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ◊W. Thus, ◊W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
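The key-to-node mapping at Chord's core is easy to illustrate. The sketch below hashes names onto a small identifier ring and assigns each key to its clockwise successor node; the 8-bit ring and the helper names are illustrative assumptions (Chord itself uses 160-bit SHA-1 identifiers), and the finger-table routing that gives Chord its logarithmic lookup cost is omitted.

```python
# Minimal Chord-style consistent-hashing sketch (ring size is illustrative).
import hashlib

M = 8  # small 2**8-id ring for illustration; Chord uses a 160-bit SHA-1 space

def chord_id(name: str) -> int:
    """Map a node or key name onto the identifier ring."""
    return int.from_bytes(hashlib.sha1(name.encode()).digest(), "big") % (2 ** M)

def successor(node_ids, key_id):
    """The first node clockwise from key_id on the ring stores the key."""
    ring = sorted(node_ids)
    for n in ring:
        if n >= key_id:
            return n
    return ring[0]  # wrap around the ring

nodes = {chord_id(f"node-{i}") for i in range(5)}
key = chord_id("my-data-item")
print(f"key {key} -> node {successor(nodes, key)}")
```

Because both nodes and keys share one hash space, nodes joining or leaving only move the keys adjacent to them on the ring, which is what lets Chord adapt as the system changes.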
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area² product (EDA²P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA²P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
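As a concrete instance of the method, the sketch below applies ADMM to the lasso, one of the applications surveyed in the review: the x-update solves a cached linear system, the z-update is soft thresholding (the proximal operator of the l1 term), and u is the scaled dual variable. The penalty parameter, iteration count, and test data are illustrative assumptions.

```python
# Minimal ADMM-for-lasso sketch: minimize 0.5*||Ax - b||^2 + lam*||x||_1.
import numpy as np

def soft_threshold(v, k):
    """Proximal operator of k*||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

def admm_lasso(A, b, lam, rho=1.0, iters=200):
    n = A.shape[1]
    x = np.zeros(n)
    z = np.zeros(n)
    u = np.zeros(n)
    # Cache the Cholesky factor used by every x-update.
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))
    Atb = A.T @ b
    for _ in range(iters):
        rhs = Atb + rho * (z - u)
        x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))  # quadratic subproblem
        z = soft_threshold(x + u, lam / rho)               # l1 subproblem
        u = u + x - z                                      # scaled dual update
    return z

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20)
x_true[:3] = [1.0, -2.0, 0.5]
b = A @ x_true + 0.01 * rng.standard_normal(50)
print(admm_lasso(A, b, lam=1.0).round(2))  # approximately recovers the sparse x
```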
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)-size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε²}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D + √n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(log N), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator, while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via error amplifier) with a moderate-gain high-frequency path (via bandpass filter) at the inputs of the PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate the two zeros required in a Type III compensator. A constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process, with the constant Gm/C biasing technique applied to one of the designs. Measurement results show that the converter output settles within 7 μs for a load current step of 500 mA. A peak efficiency of 97% is obtained at 360 mW output power, and a high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of the proposed compensator are reduced by >75% in both designs, compared to an equivalent conventional Type III compensator.
Fundamental analysis of a car to car visible light communication system This paper presents a mathematical model for car-to-car (C2C) visible light communications (VLC) that aims to predict the system performance under different communication geometries. A market-weighted headlamp beam pattern model is employed. We consider both the line-of-sight (LOS) and non-line-of-sight (NLOS) links, and outline the relationship between the communication distance, the system bit error rate (BER) performance and the BER distribution on a vertical plane. Results show that by placing a photodetector (PD) at a height of 0.2-0.4 m above the road surface in the car, the communications coverage range can be extended up to 20 m at a data rate of 2 Mbps.
A 32-Channel Time-Multiplexed Artifact-Aware Neural Recording System This paper presents a low-power, low-noise microsystem for the recording of neural local field potentials or intracranial electroencephalographic signals. It features 32 time-multiplexed channels at the electrode interface and offers the possibility to spatially delta encode data to take advantage of the large correlation of signals captured from nearby channels. The circuit also implements a mixed-signal voltage-triggered auto-ranging algorithm which allows to attenuate large interferers in digital domain while preserving neural information. This effectively increases the system dynamic range and avoids the onset of saturation. A prototype, fabricated in a standard 180 nm CMOS process, has been experimentally verified <italic xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink">in-vitro</i> with cellular cultures of primary cortical neurons from mice. The system shows an integrated input-referred noise in the 0.5–200 Hz band of 1.4 <inline-formula xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink"><tex-math notation="LaTeX">$\mathbf {\mu V_{rms}}$</tex-math></inline-formula> for a spot noise of about 85 nV <inline-formula xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink"><tex-math notation="LaTeX">$\mathbf {/\sqrt{Hz}}$</tex-math></inline-formula> . The system draws 1.5 <inline-formula xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink"><tex-math notation="LaTeX">$\boldsymbol{\mu}$</tex-math></inline-formula> W per channel from 1.2 V supply and obtains 71 dB + 26 dB dynamic range when the artifact-aware auto-ranging mechanism is enabled, without penalising other critical specifications such as crosstalk between channels or common-mode and power supply rejection ratios.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
An Active Dead-Time Control Circuit With Timing Elements for a 45-V Input 1-MHz Half-Bridge Converter In this study, a dead-time control circuit is proposed to generate independent delays for the high and low sides of half-bridge converter switches. In addition to greatly decreasing the losses of power converters, the proposed method mitigates the shoot-through current through the application of superimposed power switches. The circuit presented here comprises a switched capacitor architecture and...
A Wide-range Reconfigurable Deadtime and Delay Element for Optimal-Power Conversion A reconfigurable dead-time circuit intended for optimum power-converter operation is presented. The circuit provides a programmable delay element to produce a wide range of dead-time delays for different power-conversion applications with various loads and input voltages. The circuit utilises two tunable Schmitt triggers, two reconfigurable capacitive banks, and two adjustable current sources....
Stacked-Chip Implementation of On-Chip Buck Converter for Distributed Power Supply System in SiPs An on-chip buck converter which is implemented by stacking chips and suitable for on-chip distributed power supply systems is proposed. The operation of the converter with 3-D chip stacking is experimentally verified for the first time. The manufactured converter achieves a maximum power efficiency of 62% for an output current of 70 mA and a voltage conversion ratio of 0.7 with a switching frequen...
Hybrid Temperature Sensor Network for Area-Efficient On-Chip Thermal Map Sensing The spatial thermal distribution of a chip is essential information for dynamic thermal management. To obtain a rich thermal map, the sensor area must be reduced radically; however, shrinking the sensor size is approaching its physical limits. Against this background, we propose an area-efficient thermal sensing technique: a hybrid temperature sensor network. The proposed sensor architecture fully exploits the spatial low-pass filtering effect of thermal systems, which implies that most of the thermal information resides in the very low spatial frequency region. Our on-chip sensor network consists of a small number of accurate thermal sensors and a large number of tiny relative thermal sensors, responsible for low and high spatial frequency thermal information, respectively. By combining these sensor readouts, a thermal map upsampler synthesizes a higher spatial resolution thermal map with a proposed guided upsampling algorithm.
An 18 V Input 10 MHz Buck Converter With 125 ps Mixed-Signal Dead Time Control. A highly integrated synchronous buck converter with a predictive dead time control for input voltages >18 V with 10 MHz switching frequency is presented. A high resolution dead time of ~125 ps allows to reduce dead time dependent losses without requiring body diode conduction to evaluate the dead time. High resolution is achieved by frequency compensated sampling of the switching node and by an 8 ...
A 12-Level Series-Capacitor 48-1V DC–DC Converter With On-Chip Switch and GaN Hybrid Power Conversion This work presents a 48-1V dc–dc converter with an on-chip switch and gallium nitride (GaN) hybrid power conversion. By series connecting a 12-level Dickson switched-capacitor with a two-phase switched-inductor circuit, the capacitors take over most of the 48-V voltage stresses. The circuit, thus, reduces to an equivalent 4-1V converter, making the on-chip 5-V transistor applicable for a 48-V high...
Chord: a scalable peer-to-peer lookup protocol for internet applications A fundamental problem that confronts peer-to-peer applications is the efficient location of the node that stores a desired data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis and simulations show that Chord is scalable: Communication cost and the state maintained by each node scale logarithmically with the number of Chord nodes.
Time-delay systems: an overview of some recent advances and open problems After presenting some motivations for the study of time-delay system, this paper recalls modifications (models, stability, structure) arising from the presence of the delay phenomenon. A brief overview of some control approaches is then provided, the sliding mode and time-delay controls in particular. Lastly, some open problems are discussed: the constructive use of the delayed inputs, the digital implementation of distributed delays, the control via the delay, and the handling of information related to the delay value.
Bayesian Network Classifiers Recent work in supervised learning has shown that a surprisingly simple Bayesian classifier with strong assumptions of independence among features, called naive Bayes, is competitive with state-of-the-art classifiers such as C4.5. This fact raises the question of whether a classifier with less restrictive assumptions can perform even better. In this paper we evaluate approaches for inducing classifiers from data, based on the theory of learning Bayesian networks. These networks are factored representations of probability distributions that generalize the naive Bayesian classifier and explicitly represent statements about independence. Among these approaches we single out a method we call Tree Augmented Naive Bayes (TAN), which outperforms naive Bayes, yet at the same time maintains the computational simplicity (no search involved) and robustness that characterize naive Bayes. We experimentally tested these approaches, using problems from the University of California at Irvine repository, and compared them to C4.5, naive Bayes, and wrapper methods for feature selection.
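For reference, the naive Bayes baseline that TAN generalizes can be written in a few lines for discrete features. The sketch below uses Laplace smoothing over the observed value set of each feature; the toy data and function names are illustrative assumptions, and the TAN tree-learning step is not shown.

```python
# Minimal discrete naive Bayes sketch with Laplace smoothing (illustrative).
import math
from collections import Counter, defaultdict

def train_nb(X, y):
    """Return log-priors, per-(class, feature) value counts, and value sets."""
    n = len(y)
    priors = {c: math.log(k / n) for c, k in Counter(y).items()}
    counts = defaultdict(Counter)
    values = defaultdict(set)  # feature -> observed value set, for smoothing
    for row, c in zip(X, y):
        for j, v in enumerate(row):
            counts[(c, j)][v] += 1
            values[j].add(v)
    return priors, counts, values

def predict_nb(priors, counts, values, row):
    def score(c):
        s = priors[c]
        for j, v in enumerate(row):
            cnt = counts[(c, j)]
            # Laplace-smoothed conditional probability P(x_j = v | class = c)
            s += math.log((cnt[v] + 1) / (sum(cnt.values()) + len(values[j])))
        return s
    return max(priors, key=score)

X = [("sunny", "hot"), ("sunny", "cool"), ("rainy", "cool"), ("rainy", "hot")]
y = ["no", "yes", "yes", "no"]
model = train_nb(X, y)
print(predict_nb(*model, ("sunny", "cool")))  # -> "yes"
```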
Time-optimal leader election in general networks This note presents a simple time-optimal distributed algorithm for electing a leader in a general network. For several important classes of networks this algorithm is also message-optimal and thus performs better than previous algorithms for the problem.
DySER: Unifying Functionality and Parallelism Specialization for Energy-Efficient Computing The DySER (Dynamically Specializing Execution Resources) architecture supports both functionality specialization and parallelism specialization. By dynamically specializing frequently executing regions and applying parallelism mechanisms, DySER provides efficient functionality and parallelism specialization. It outperforms an out-of-order CPU, Streaming SIMD Extensions (SSE) acceleration, and GPU acceleration while consuming less energy. The full-system field-programmable gate array (FPGA) prototype of DySER integrated into OpenSparc demonstrates a practical implementation.
An Identity Authentication Mechanism Based on Timing Covert Channel In identity authentication, many advanced encryption techniques are applied to confirm and protect the user identity. Although the identity information is transmitted as ciphertext over the Internet, attackers can steal and forge the identity by eavesdropping, cryptanalysis, and forgery. In this paper, a new identity authentication mechanism is proposed, which exploits the Timing Covert Channel (TCC) to transmit the identity information. TCC was originally a hacker technique for leaking information under supervision, using the sending times of packets to encode the information. In our method, the intervals between packets indicate the authentication tags. It is difficult for attackers to eavesdrop on, crack, or forge the TCC identity, since the traffic is too large to analyze and the noise differs between users and attackers. A platform is designed to verify the proposed method. The experiments show that the intervals and the thresholds are the key factors for accuracy and efficiency, and they also demonstrate that our method is a secure way to carry identity information that could be deployed in various network applications.
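The encoding idea can be sketched directly: each bit of the authentication tag maps to a short or long inter-packet gap, and the receiver recovers the bits by thresholding the observed gaps. The interval values, the threshold, and the jitter-free channel below are illustrative assumptions; a real deployment must choose thresholds against network noise, as the abstract notes.

```python
# Minimal timing-covert-channel encoding sketch (interval values illustrative).
SHORT, LONG, THRESH = 0.02, 0.08, 0.05  # seconds

def tag_to_intervals(tag_bits):
    """Sender: map each tag bit to a short (0) or long (1) inter-packet gap."""
    return [LONG if b else SHORT for b in tag_bits]

def intervals_to_tag(arrival_times):
    """Receiver: recover bits by thresholding the observed gaps."""
    gaps = [b - a for a, b in zip(arrival_times, arrival_times[1:])]
    return [1 if g > THRESH else 0 for g in gaps]

bits = [1, 0, 1, 1, 0]
times = [0.0]
for gap in tag_to_intervals(bits):
    times.append(times[-1] + gap)  # jitter-free channel, purely for illustration
assert intervals_to_tag(times) == bits
```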
Robust compensation of a chattering time-varying input delay We investigate the design of a prediction-based controller for a linear system subject to a time-varying input delay, not necessarily causal. This means that the information feeding the system can be older than information previously received. We propose to use the current delay value in the prediction employed in the control law. Modeling the input delay as a transport partial differential equation, we prove asymptotic tracking of the system state, provided that the average ℒ2-norm of the delay time-derivative is sufficiently small. This result is obtained by generalizing the Halanay inequality to time-varying differential inequalities.
A 12-Bit Dynamic Tracking Algorithm-Based SAR ADC With Real-Time QRS Detection A 12-bit successive approximation register (SAR) ADC based on a dynamic tracking algorithm and a real-time QRS-detection algorithm are proposed. The dynamic tracking algorithm features two tracking windows adjacent to the prediction interval. The algorithm tracks the input signal's variation range and automatically adjusts the subrange interval and updates the prediction code. The QRS-complex detection algorithm integrates a synchronous time-sequential ADC and a real-time QRS detector. The chip is fabricated in a standard 0.13 μm CMOS process with a 0.6 V supply. Measurement results show that the proposed ADC exhibits a 10.72 effective number of bits (ENOB) and a 79.63 dB spur-free dynamic range (SFDR) at a 10 kHz sample rate given a 41.5 Hz sinusoid input. The DNL and INL are bounded at -0.6/0.62 LSB and -0.67/1.43 LSB. The ADC achieves a FoM of 48 fJ/conversion-step in the best case. The prototype is also tested with ECG signal input and successfully extracts the heartbeat signal.
1.05
0.05
0.05
0.05
0.05
0.05
0
0
0
0
0
0
0
0
On implementing omega with weak reliability and synchrony assumptions We study the feasibility and cost of implementing Ω, a fundamental failure detector at the core of many algorithms, in systems with weak reliability and synchrony assumptions. Intuitively, Ω allows processes to eventually elect a common leader. We first give an algorithm that implements Ω in a weak system S where processes are synchronous, but: (a) any number of them may crash, and (b) only the output links of an unknown correct process are eventually timely (all other links can be asynchronous and/or lossy). This is in contrast to previous implementations of Ω which assume that a quadratic number of links are eventually timely, or systems that are strong enough to implement the eventually perfect failure detector ◊P. We next show that implementing Ω in S is expensive: even if we want an implementation that tolerates just one process crash, all correct processes (except possibly one) must send messages forever; moreover, a quadratic number of links must carry messages forever. We then show that with a small additional assumption (the existence of some unknown correct process whose asynchronous links are lossy but fair), we can implement Ω efficiently: we give an algorithm for Ω such that eventually only one process (the elected leader) sends messages.
Eventual Leader Election with Weak Assumptions on Initial Knowledge, Communication Reliability, and Synchrony This paper considers the eventual leader election problem in asynchronous message-passing systems where an arbitrary number t of processes can crash (t < n, where n is the total number of processes). It considers weak assumptions both on the initial knowledge of the processes and on the network behavior. More precisely, initially, a process knows only its identity and the fact that the process identities are different and totally ordered (it knows neither n nor t). Two eventual leader election protocols and a lower bound are presented. The first protocol assumes that a process also knows a lower bound α on the number of processes that do not crash. This protocol requires the following behavioral properties from the underlying network: the graph made up of the correct processes and fair lossy links is strongly connected, and there is a correct process connected to (n − f) − α other correct processes (where f is the actual number of crashes in the considered run) through eventually timely paths (paths made up of correct processes and eventually timely links). This protocol is not communication-efficient in the sense that each correct process has to send messages forever. The second protocol is communication-efficient: after some time, only the final common leader has to send messages forever. This protocol does not require the processes to know α, but requires stronger properties from the underlying network: each pair of correct processes has to be connected by fair lossy links (one in each direction), and there is a correct process whose n − f − 1 output links to the rest of the correct processes have to be eventually timely. A matching lower bound result shows that any eventual leader election protocol must have runs with this number of eventually timely links, even if all processes know all the process identities. In addition to being communication-efficient, the second protocol has another noteworthy efficiency property: be the run finite or infinite, all the local variables and message fields have a finite domain in the run.
Communication-efficient leader election and consensus with limited link synchrony We study the degree of synchrony required to implement the leader election failure detector Ω and to solve consensus in partially synchronous systems. We show that in a system with n processes and up to f process crashes, one can implement Ω and solve consensus provided there exists some (unknown) correct process with f outgoing links that are eventually timely. In the special case where f = 1, an important case in practice, this implies that to implement Ω and solve consensus it is sufficient to have just one eventually timely link -- all the other links in the system, Θ(n²) of them, may be asynchronous. There is no need to know which link p → q is eventually timely, when it becomes timely, or what is its bound on message delay. Surprisingly, it is not even required that the source p or destination q of this link be correct: either p or q may actually crash, in which case the link p → q is eventually timely in a trivial way, and it is useless for sending messages. We show that these results are in a sense optimal: even if every process has f - 1 eventually timely links, neither Ω nor consensus can be solved. We also give an algorithm that implements Ω in systems where some correct process has f outgoing links that are eventually timely, such that eventually only f links carry messages, and we show that this is optimal. For f = 1, this algorithm ensures that all the links, except for one, eventually become quiescent.
Time-free and timer-based assumptions can be combined to obtain eventual leadership Leader-based protocols rest on a primitive able to provide the processes with the same unique leader. Such protocols are very common in distributed computing to solve synchronization or coordination problems. Unfortunately, providing such a primitive is far from being trivial in asynchronous distributed systems prone to process crashes. (It is even impossible in fault-prone purely asynchronous systems.) To circumvent this difficulty, several protocols have been proposed that build a leader facility on top of an asynchronous distributed system enriched with additional assumptions. The protocols proposed so far consider either additional assumptions based on synchrony or additional assumptions on the pattern of the messages that are exchanged. Considering systems with n processes and up to f process crashes, 1 ≤ f < n, this paper investigates the combination of a time-free assumption on the message pattern with a synchrony assumption on process speed and message delay. It shows that both types of assumptions can be combined to obtain a hybrid eventual leader protocol benefiting from the best of both worlds. This combined assumption considers a star communication structure involving f+1 processes. Its noteworthy feature lies in the level of combination of both types of assumption that is "as fine as possible" in the sense that each of the f channels of the star has to satisfy a property independently of the property satisfied by each of the f-1 other channels (the f channels do not have to satisfy the same assumption). More precisely, this combined assumption is the following: There is a correct process p (center of the star) and a set Q of f processes q (p ∉ Q) such that, eventually, either 1) each time it broadcasts a query, q receives a response from p among the (n-f) first responses to that query, or 2) the channel from p to q is timely. (The processes in the set Q can crash.) A surprisingly simple eventual leader protocol based on this fine-grain hybrid assumption is proposed and proved correct. An improvement is also presented.
Reliable MAC layer multicast in IEEE 802.11 wireless networks Multicast/broadcast is an important service primitive in networks. The IEEE 802.11 multicast/broadcast protocol is based on the basic access procedure of Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA). This protocol does not provide any media access control (MAC) layer recovery on multicast/broadcast frames. As a result, the reliability of the multicast/broadcast service is reduced due to the increased probability of lost frames resulting from interference or collisions. In this paper, we propose a reliable Batch Mode Multicast MAC protocol, BMMM, which substantially reduces the number of contention phases and thus considerably reduces the time required for a multicast/broadcast. We then propose a Location Aware Multicast MAC protocol, LAMM, that uses station location information to further improve upon BMMM. Extensive analysis and simulation results validate the reliability and efficiency of our multicast MAC protocols.
Stable Leader Election We introduce the notion of stable leader election and derive several algorithms for this problem. Roughly speaking, a leader election algorithm is stable if it ensures that once a leader is elected, it remains the leader for as long as it does not crash and its links have been behaving well, irrespective of the behavior of other processes and links. In addition to being stable, our leader election algorithms have several desirable properties. In particular, they are all communication-efficient, i.e., they eventually use only n links to carry messages, and they are robust, i.e., they work in systems where only the links to/from some correct process are required to be eventually timely. Moreover, our best leader election algorithm tolerates message losses, and it ensures that a leader is elected in constant time when the system is stable. We conclude the paper by applying the above ideas to derive a robust and efficient algorithm for the eventually perfect failure detector ◊P.
Reaching Agreement in the Presence of Faults The problem addressed here concerns a set of isolated processors, some unknown subset of which may be faulty, that communicate only by means of two-party messages. Each nonfaulty processor has a private value of information that must be communicated to each other nonfaulty processor. Nonfaulty processors always communicate honestly, whereas faulty processors may lie. The problem is to devise an algorithm in which processors communicate their own values and relay values received from others that allows each nonfaulty processor to infer a value for each other processor. The value inferred for a nonfaulty processor must be that processor's private value, and the value inferred for a faulty one must be consistent with the corresponding value inferred by each other nonfaulty processor. It is shown that the problem is solvable for, and only for, n ≥ 3m + 1, where m is the number of faulty processors and n is the total number. It is also shown that if faulty processors can refuse to pass on information but cannot falsely relay information, the problem is solvable for arbitrary n ≥ m ≥ 0. This weaker assumption can be approximated in practice using cryptographic methods.
Information dissemination in highly dynamic graphs We investigate to what extent flooding and routing is possible if the graph is allowed to change unpredictably at each time step. We study what minimal requirements are necessary so that a node may correctly flood or route a message in a network whose links may change arbitrarily at any given point, subject to the condition that the underlying graph is connected. We look at algorithmic constraints such as limited storage, no knowledge of an upper bound on the number of nodes, and no usage of identifiers. We look at flooding as well as routing to some existing specified destination and give algorithms.
How to share a secret In this paper we show how to divide data D into n pieces in such a way that D is easily reconstructable from any k pieces, but even complete knowledge of k - 1 pieces reveals absolutely no information about D. This technique enables the construction of robust key management schemes for cryptographic systems that can function securely and reliably even when misfortunes destroy half the pieces and security breaches expose all but one of the remaining pieces.
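The construction is concrete enough to sketch: hide the secret in the constant term of a random degree-(k-1) polynomial over a prime field, hand out n point evaluations as the pieces, and reconstruct by Lagrange interpolation at zero. The field choice and the function names below are illustrative assumptions.

```python
# Minimal (k, n) threshold secret-sharing sketch (field choice illustrative).
import random

P = 2**127 - 1  # a Mersenne prime, chosen here purely for illustration

def split(secret, n, k):
    """Create n shares; any k of them suffice to recover the secret."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the constant term."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = split(secret=12345, n=5, k=3)
assert reconstruct(shares[:3]) == 12345   # any k = 3 shares recover the secret
assert reconstruct(shares[1:4]) == 12345
```

Fewer than k points leave the constant term completely undetermined over the field, which is what gives the scheme its information-theoretic security.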
COCA: A secure distributed online certification authority COCA is a fault-tolerant and secure on-line certification authority that has been built and deployed both in a local area network and in the Internet. Replication is used to achieve availability; proactive recovery with threshold cryptography is used for digitally signing certificates in a way that defends against mobile adversaries which attack, compromise, and control one replica for a limited period of time before moving on to another. Relatively weak assumptions characterize environments in which COCA's protocols will execute correctly. No assumption is made about execution speed and message delivery delays; channels are expected to exhibit only intermittent reliability; and with 3t+1 COCA servers up to t may be faulty or compromised. The result is a system with inherent defenses to certain denial of service attacks because, by their very nature, weak assumptions are difficult for attackers to invalidate. In addition, traditional techniques, including request authorization, resource management based on segregation and scheduling different classes of requests, as well as caching results of expensive cryptographic operations further reduce COCA's vulnerability to denial of service attacks. Results from experiments in a local area network and the Internet allow a quantitative evaluation of the various means COCA employs to resist denial of service attacks.
Exploiting availability prediction in distributed systems Loosely-coupled distributed systems have significant scale and cost advantages over more traditional architectures, but the availability of the nodes in these systems varies widely. Availability modeling is crucial for predicting per-machine resource burdens and understanding emergent, system-wide phenomena. We present new techniques for predicting availability and test them using traces taken from three distributed systems. We then describe three applications of availability prediction. The first, availability-guided replica placement, reduces object copying in a distributed data store while increasing data availability. The second shows how availability prediction can improve routing in delay-tolerant networks. The third combines availability prediction with virus modeling to improve forecasts of global infection dynamics.
A 41-phase switched-capacitor power converter with 3.8mV output ripple and 81% efficiency in baseline 90nm CMOS.
Fundamental analysis of a car to car visible light communication system This paper presents a mathematical model for car-to-car (C2C) visible light communications (VLC) that aims to predict the system performance under different communication geometries. A market-weighted headlamp beam pattern model is employed. We consider both the line-of-sight (LOS) and non-line-of-sight (NLOS) links, and outline the relationship between the communication distance, the system bit error rate (BER) performance and the BER distribution on a vertical plane. Results show that by placing a photodetector (PD) at a height of 0.2-0.4 m above the road surface in the car, the communication coverage range can be extended up to 20 m at a data rate of 2 Mbps.
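As a hedged back-of-the-envelope of the LOS link only: the sketch below substitutes a generalized Lambertian beam for the paper's market-weighted headlamp pattern and assumes OOK with BER = Q(√SNR); every parameter value is illustrative rather than taken from the paper.

```python
import math

def lambertian_gain(phi, m_order):
    """Radiant intensity pattern of a generalized Lambertian emitter."""
    return (m_order + 1) / (2 * math.pi) * math.cos(phi) ** m_order

def los_ber(distance_m, phi=0.05, psi=0.05, m_order=20,
            pt_w=1.0, area_m2=1e-4, resp=0.5, noise_w=2e-7):
    # Received optical power over a LOS link, then electrical SNR for OOK.
    pr = (pt_w * lambertian_gain(phi, m_order)
          * area_m2 * math.cos(psi) / distance_m**2)
    snr = (resp * pr / noise_w) ** 2
    return 0.5 * math.erfc(math.sqrt(snr / 2))  # Q(sqrt(SNR))

for d in (5, 10, 20):
    print(d, los_ber(d))  # BER degrades rapidly with distance
```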
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
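The offloading mechanism can be pictured as a density split over the active frontier; the threshold, cost model, and names below are assumptions for illustration, not Hetraph's actual policy.

```python
# Hypothetical sketch: dense frontier rows keep a memristor crossbar busy,
# sparse rows run faster on digital cores.
def partition_rows(csr_rows, crossbar_width=128, density_threshold=0.25):
    """csr_rows: list of (row_id, column_indices) for the active frontier."""
    analog, digital = [], []
    for row_id, cols in csr_rows:
        utilization = len(cols) / crossbar_width
        (analog if utilization >= density_threshold else digital).append(row_id)
    return analog, digital

frontier = [(0, list(range(90))), (1, [3, 17]), (2, list(range(40))), (3, [5])]
print(partition_rows(frontier))  # ([0, 2], [1, 3])
```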
Scores: 1.019323, 0.017896, 0.016384, 0.015439, 0.014271, 0.011028, 0.003506, 0.000176, 0, 0, 0, 0, 0, 0
A detailed power model for field-programmable gate arrays Power has become a critical issue for field-programmable gate array (FPGA) vendors. Understanding the power dissipation within FPGAs is the first step in developing power-efficient architectures and computer-aided design (CAD) tools for FPGAs. This article describes a detailed and flexible power model which has been integrated in the widely used Versatile Place and Route (VPR) CAD tool. This power model estimates the dynamic, short-circuit, and leakage power consumed by FPGAs. It is the first flexible power model developed to evaluate architectural tradeoffs and the efficiency of power-aware CAD tools for a variety of FPGA architectures, and is freely available for noncommercial use. The model is flexible, in that it can estimate the power for a wide variety of FPGA architectures, and it is fast, in that it does not require extensive simulation, meaning it can be used to explore a large architectural space. We show how the model can be used to investigate the impact of various architectural parameters on the energy consumed by the FPGA, focusing on the segment length, switch block topology, lookup-table size, and cluster size.
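The dynamic component such a model estimates follows the standard CMOS relation P = ½·α·C·V²·f summed over circuit nodes; the capacitances and switching activities below are made-up placeholders, not VPR's values.

```python
def dynamic_power(nodes, vdd=1.2, freq_hz=200e6):
    """nodes: list of (switching_activity, capacitance_farads) pairs."""
    return sum(0.5 * a * c * vdd**2 * freq_hz for a, c in nodes)

# e.g. a LUT output, a routing segment, and a clock buffer
nodes = [(0.12, 15e-15), (0.12, 120e-15), (1.0, 40e-15)]
print(f"{dynamic_power(nodes) * 1e6:.1f} uW")  # roughly 8.1 uW
```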
Flexible Circuits and Architectures for Ultralow Power Subthreshold digital circuits minimize energy per operation and are thus ideal for ultralow-power (ULP) applications with low performance requirements. However, a large range of ULP applications continue to face performance constraints at certain times that exceed the capabilities of subthreshold operation. In this paper, we give two different examples to show that designing flexibility into ULP systems across the architecture and circuit levels can meet both the ULP requirements and the performance demands. Specifically, we first present a method that expands on ultradynamic voltage scaling (UDVS) to combine multiple supply voltages with component level power switches to provide more efficient operation at any energy-delay point and low overhead switching between points. This system supports operation across the space from maximum performance, when necessary, to minimum energy, when possible. It thus combines the benefits of single-VDD, multi-VDD, and dynamic voltage scaling (DVS) while improving on them all. Second, we propose that reconfigurable subthreshold circuits can increase applicability for ULP embedded systems. Since ULP devices conventionally require custom circuit design but the manufacturing volume for many ULP applications is low, a subthreshold field programmable gate array (FPGA) offers a cost-effective custom solution with hardware flexibility that makes it applicable across a wide range of applications. We describe the design of a subthreshold FPGA to support ULP operation and identify key challenges to this effort.
Hardware acceleration of database operations As the amount of memory in database systems grows, entire database tables, or even databases, are able to fit in the system's memory, making in-memory database operations more prevalent. This shift from disk-based to in-memory database systems has contributed to a move from row-wise to columnar data storage. Furthermore, common database workloads have grown beyond online transaction processing (OLTP) to include online analytical processing and data mining. These workloads analyze huge datasets that are often irregular and not indexed, making traditional database operations like joins much more expensive. In this paper we explore using dedicated hardware to accelerate in-memory database operations. We present hardware to accelerate the selection process of compacting a single column into a linear column of selected data, joining two sorted columns via merging, and sorting a column. Finally, we put these primitives together to accelerate an entire join operation. We implement a prototype of this system using FPGAs and show substantial improvements in both absolute throughput and utilization of memory bandwidth. Using the prototype as a guide, we explore how the hardware resources required by our design change with the desired throughput.
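The merge-join primitive is the easiest of these to show in software: one linear pass over two sorted key columns, emitting matching index pairs. This is only the reference behavior a hardware pipeline would stream, not the FPGA design itself.

```python
def merge_join(left, right):
    """Join two sorted key columns; returns (left_index, right_index) pairs."""
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] < right[j]:
            i += 1
        elif left[i] > right[j]:
            j += 1
        else:  # equal keys: emit the cross product of the duplicate runs
            j0 = j
            while j < len(right) and right[j] == left[i]:
                out.append((i, j))
                j += 1
            i += 1
            # rewind if the next left key repeats, to re-emit the right run
            j = j0 if i < len(left) and left[i] == right[j0] else j
    return out

print(merge_join([1, 3, 3, 7], [3, 3, 5, 7]))
# [(1, 0), (1, 1), (2, 0), (2, 1), (3, 3)]
```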
The polyhedral model is more widely applicable than you think The polyhedral model is a powerful framework for automatic optimization and parallelization. It is based on an algebraic representation of programs, allowing one to construct and search for complex sequences of optimizations. This model is now mature and reaches production compilers. The main limitation of the polyhedral model is known to be its restriction to statically predictable, loop-based program parts. This paper removes this limitation, allowing the model to operate on general data-dependent control flow. We embed control and exit predicates as first-class citizens of the algebraic representation, from program analysis to code generation. Complementing previous (partial) attempts in this direction, our work concentrates on extending the code generation step and does not compromise the expressiveness of the model. We present experimental evidence that our extension is relevant for program optimization and parallelization, showing performance improvements on benchmarks that were thought to be out of reach of the polyhedral model.
Coarse grained reconfigurable architectures in the past 25 years: Overview and classification Reconfigurable architectures are becoming more popular now that general-purpose compute performance no longer increases as rapidly as before. Field-programmable gate arrays are slowly moving in the direction of Coarse Grain Reconfigurable Architectures (CGRA) by adding DSP and other coarse-grained IP blocks, while general-purpose processors become more heterogeneous and include sub-word parallelism and even some reconfigurable logic. In the past 25 years, several CGRAs have been published. In this paper an overview and classification of these architectures is presented. This work also provides a clear definition of CGRAs and identifies topics for future research which are key to unlocking the full potential of CGRAs.
Hidden factors and hidden topics: understanding rating dimensions with review text In order to recommend products to users we must ultimately predict how a user will respond to a new product. To do so we must uncover the implicit tastes of each user as well as the properties of each product. For example, in order to predict whether a user will enjoy Harry Potter, it helps to identify that the book is about wizards, as well as the user's level of interest in wizardry. User feedback is required to discover these latent product and user dimensions. Such feedback often comes in the form of a numeric rating accompanied by review text. However, traditional methods often discard review text, which makes user and product latent dimensions difficult to interpret, since they ignore the very text that justifies a user's rating. In this paper, we aim to combine latent rating dimensions (such as those of latent-factor recommender systems) with latent review topics (such as those learned by topic models like LDA). Our approach has several advantages. Firstly, we obtain highly interpretable textual labels for latent rating dimensions, which helps us to `justify' ratings with text. Secondly, our approach more accurately predicts product ratings by harnessing the information present in review text; this is especially true for new products and users, who may have too few ratings to model their latent factors, yet may still provide substantial information from the text of even a single review. Thirdly, our discovered topics can be used to facilitate other tasks such as automated genre discovery, and to identify useful and representative reviews.
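The latent-factor half of such a model reduces to the standard predictor rec(u, i) = μ + b_u + b_i + γ_u·γ_i, which the paper couples with review topics; the sketch below shows only that predictor, with toy numbers, and omits the topic coupling that is the paper's contribution.

```python
import numpy as np

def predict(mu, b_user, b_item, gamma_user, gamma_item):
    """Global offset + user bias + item bias + latent-factor interaction."""
    return mu + b_user + b_item + float(np.dot(gamma_user, gamma_item))

gu = np.array([0.8, -0.1, 0.3])   # e.g. the user's interest in "wizardry"
gi = np.array([0.9, 0.0, 0.2])    # e.g. how much the book is about wizards
print(predict(mu=3.6, b_user=0.2, b_item=0.4,
              gamma_user=gu, gamma_item=gi))    # 4.98
```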
Two Fast Algorithms for Sparse Matrices: Multiplication and Permuted Transposition Let A and B be two sparse matrices whose orders are p by q and q by r. Their product C = AB requires N nontrivial multiplications, where 0 ≤ N ≤ pqr. The operation count of our algorithm is usually proportional to N; however, its worst case is O(p, r, N_A, N), where N_A is the number of elements in A. This algorithm can be used to assemble the sparse matrix arising from a finite element problem from the basic elements, using Σ_{g=1}^{m} [order(g)]² operations, where m is the total number of basic elements and order(g) is the order of the g-th element matrix. The concept of an unordered merge plays a key role in obtaining our fast multiplication algorithm. It forces us to accept an unordered sparse row-wise format as output for the product C. The permuted transposition algorithm computes (RA)^T in O(p, q, N_A) operations, where R is a permutation matrix. It also orders an unordered sparse row-wise representation. We can combine these algorithms to produce an O(M) algorithm to solve Ax = b, where M is the number of multiplications needed to factor A into LU.
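In the same row-wise spirit, a sparse product can be computed with an unordered accumulator per output row, so the work stays proportional to the nontrivial multiplications N; this dict-based sketch is an illustration, not the paper's exact data structure.

```python
def spgemm(a_rows, b_rows):
    """a_rows/b_rows: per-row dicts {col: value}; returns rows of C = AB."""
    c_rows = []
    for a_row in a_rows:
        acc = {}                        # the "unordered merge" accumulator
        for k, a_val in a_row.items():
            for j, b_val in b_rows[k].items():
                acc[j] = acc.get(j, 0) + a_val * b_val
        c_rows.append(acc)
    return c_rows

A = [{0: 2, 2: 1}, {1: 3}]              # 2x3 sparse matrix
B = [{1: 4}, {0: 1, 2: 5}, {1: 7}]      # 3x3 sparse matrix
print(spgemm(A, B))                     # [{1: 15}, {0: 3, 2: 15}]
```

Note that each output row's keys come out unordered, which is exactly the trade-off the abstract describes.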
Hierarchical reconfigurable computing arrays for efficient CGRA-based embedded systems Coarse-grained reconfigurable architecture (CGRA) based embedded systems aim at achieving high system performance with sufficient flexibility to map a variety of applications. However, significant area and power consumption in the arrays limits their competitiveness as a processing core. In this work, we propose a hierarchical reconfigurable computing array architecture to reduce power/area and enhance performance in configurable embedded systems. CGRA-based embedded systems consisting of hierarchical configurable computing arrays with varying size and communication speed were examined for multimedia and other applications. Experimental results show that the proposed approach reduces on-chip area by 22%, execution time by up to 72% and power consumption by up to 55% when compared with conventional CGRA-based architectures.
Deep learning Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech. Machine-learning technology powers many aspects of modern society: from web searches to content filtering on social networks to recommendations on e-commerce websites, and it is increasingly present in consumer products such as cameras and smartphones. Machine-learning systems are used to identify objects in images, transcribe speech into text, match news items, posts or products with users' interests, and select relevant results of search. Increasingly, these applications make use of a class of techniques called deep learning. Conventional machine-learning techniques were limited in their ability to process natural data in their raw form. For decades, constructing a pattern-recognition or machine-learning system required careful engineering and considerable domain expertise to design a feature extractor that transformed the raw data (such as the pixel values of an image) into a suitable internal representation or feature vector from which the learning subsystem, often a classifier, could detect or classify patterns in the input. Representation learning is a set of methods that allows a machine to be fed with raw data and to automatically discover the representations needed for detection or classification. Deep-learning methods are representation-learning methods with multiple levels of representation, obtained by composing simple but non-linear modules that each transform the representation at one level (starting with the raw input) into a representation at a higher, slightly more abstract level. With the composition of enough such transformations, very complex functions can be learned. For classification tasks, higher layers of representation amplify aspects of the input that are important for discrimination and suppress irrelevant variations. An image, for example, comes in the form of an array of pixel values, and the learned features in the first layer of representation typically represent the presence or absence of edges at particular orientations and locations in the image. The second layer typically detects motifs by spotting particular arrangements of edges, regardless of small variations in the edge positions. The third layer may assemble motifs into larger combinations that correspond to parts of familiar objects, and subsequent layers would detect objects as combinations of these parts. The key aspect of deep learning is that these layers of features are not designed by human engineers: they are learned from data using a general-purpose learning procedure. Deep learning is making major advances in solving problems that have resisted the best attempts of the artificial intelligence community for many years. 
It has turned out to be very good at discovering intricate structures in high-dimensional data and is therefore applicable to many domains of science, business and government. In addition to beating records in image recognition1, 2, 3, 4 and speech recognition5, 6, 7, it has beaten other machine-learning techniques at predicting the activity of potential drug molecules8, analysing particle accelerator data9, 10, reconstructing brain circuits11, and predicting the effects of mutations in non-coding DNA on gene expression and disease12, 13. Perhaps more surprisingly, deep learning has produced extremely promising results for various tasks in natural language understanding14, particularly topic classification, sentiment analysis, question answering15 and language translation16, 17. We think that deep learning will have many more successes in the near future because it requires very little engineering by hand, so it can easily take advantage of increases in the amount of available computation and data. New learning algorithms and architectures that are currently being developed for deep neural networks will only accelerate this progress. The most common form of machine learning, deep or not, is supervised learning. Imagine that we want to build a system that can classify images as containing, say, a house, a car, a person or a pet. We first collect a large data set of images of houses, cars, people and pets, each labelled with its category. During training, the machine is shown an image and produces an output in the form of a vector of scores, one for each category. We want the desired category to have the highest score of all categories, but this is unlikely to happen before training. We compute an objective function that measures the error (or distance) between the output scores and the desired pattern of scores. The machine then modifies its internal adjustable parameters to reduce this error. These adjustable parameters, often called weights, are real numbers that can be seen as 'knobs' that define the input–output function of the machine. In a typical deep-learning system, there may be hundreds of millions of these adjustable weights, and hundreds of millions of labelled examples with which to train the machine. To properly adjust the weight vector, the learning algorithm computes a gradient vector that, for each weight, indicates by what amount the error would increase or decrease if the weight were increased by a tiny amount. The weight vector is then adjusted in the opposite direction to the gradient vector. The objective function, averaged over all the training examples, can be seen as a kind of hilly landscape in the high-dimensional space of weight values. The negative gradient vector indicates the direction of steepest descent in this landscape, taking it closer to a minimum, where the output error is low on average. In practice, most practitioners use a procedure called stochastic gradient descent (SGD). This consists of showing the input vector for a few examples, computing the outputs and the errors, computing the average gradient for those examples, and adjusting the weights accordingly. The process is repeated for many small sets of examples from the training set until the average of the objective function stops decreasing. It is called stochastic because each small set of examples gives a noisy estimate of the average gradient over all examples. 
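The loop just described fits in a few lines; here it is for a toy linear model with a squared-error objective, with made-up data and step size.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
true_w = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = X @ true_w + 0.1 * rng.normal(size=1000)

w = np.zeros(5)
for step in range(2000):
    idx = rng.integers(0, len(X), size=32)        # a small random minibatch
    xb, yb = X[idx], y[idx]
    err = xb @ w - yb                             # output error per example
    grad = xb.T @ err / len(idx)                  # average gradient estimate
    w -= 0.05 * grad                              # step against the gradient
print(np.round(w, 2))                             # close to true_w
```

Each minibatch gives only a noisy gradient estimate, which is the "stochastic" in SGD, yet the average objective still decreases.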
This simple procedure usually finds a good set of weights surprisingly quickly when compared with far more elaborate optimization techniques18. After training, the performance of the system is measured on a different set of examples called a test set. This serves to test the generalization ability of the machine — its ability to produce sensible answers on new inputs that it has never seen during training. Many of the current practical applications of machine learning use linear classifiers on top of hand-engineered features. A two-class linear classifier computes a weighted sum of the feature vector components. If the weighted sum is above a threshold, the input is classified as belonging to a particular category. Since the 1960s we have known that linear classifiers can only carve their input space into very simple regions, namely half-spaces separated by a hyperplane19. But problems such as image and speech recognition require the input–output function to be insensitive to irrelevant variations of the input, such as variations in position, orientation or illumination of an object, or variations in the pitch or accent of speech, while being very sensitive to particular minute variations (for example, the difference between a white wolf and a breed of wolf-like white dog called a Samoyed). At the pixel level, images of two Samoyeds in different poses and in different environments may be very different from each other, whereas two images of a Samoyed and a wolf in the same position and on similar backgrounds may be very similar to each other. A linear classifier, or any other 'shallow' classifier operating on raw pixels could not possibly distinguish the latter two, while putting the former two in the same category. This is why shallow classifiers require a good feature extractor that solves the selectivity–invariance dilemma — one that produces representations that are selective to the aspects of the image that are important for discrimination, but that are invariant to irrelevant aspects such as the pose of the animal. To make classifiers more powerful, one can use generic non-linear features, as with kernel methods20, but generic features such as those arising with the Gaussian kernel do not allow the learner to generalize well far from the training examples21. The conventional option is to hand design good feature extractors, which requires a considerable amount of engineering skill and domain expertise. But this can all be avoided if good features can be learned automatically using a general-purpose learning procedure. This is the key advantage of deep learning. A deep-learning architecture is a multilayer stack of simple modules, all (or most) of which are subject to learning, and many of which compute non-linear input–output mappings. Each module in the stack transforms its input to increase both the selectivity and the invariance of the representation. With multiple non-linear layers, say a depth of 5 to 20, a system can implement extremely intricate functions of its inputs that are simultaneously sensitive to minute details — distinguishing Samoyeds from white wolves — and insensitive to large irrelevant variations such as the background, pose, lighting and surrounding objects. From the earliest days of pattern recognition22, 23, the aim of researchers has been to replace hand-engineered features with trainable multilayer networks, but despite its simplicity, the solution was not widely understood until the mid 1980s. 
As it turns out, multilayer architectures can be trained by simple stochastic gradient descent. As long as the modules are relatively smooth functions of their inputs and of their internal weights, one can compute gradients using the backpropagation procedure. The idea that this could be done, and that it worked, was discovered independently by several different groups during the 1970s and 1980s24, 25, 26, 27. The backpropagation procedure to compute the gradient of an objective function with respect to the weights of a multilayer stack of modules is nothing more than a practical application of the chain rule for derivatives. The key insight is that the derivative (or gradient) of the objective with respect to the input of a module can be computed by working backwards from the gradient with respect to the output of that module (or the input of the subsequent module) (Fig. 1). The backpropagation equation can be applied repeatedly to propagate gradients through all modules, starting from the output at the top (where the network produces its prediction) all the way to the bottom (where the external input is fed). Once these gradients have been computed, it is straightforward to compute the gradients with respect to the weights of each module. Many applications of deep learning use feedforward neural network architectures (Fig. 1), which learn to map a fixed-size input (for example, an image) to a fixed-size output (for example, a probability for each of several categories). To go from one layer to the next, a set of units compute a weighted sum of their inputs from the previous layer and pass the result through a non-linear function. At present, the most popular non-linear function is the rectified linear unit (ReLU), which is simply the half-wave rectifier f(z) = max(z, 0). In past decades, neural nets used smoother non-linearities, such as tanh(z) or 1/(1 + exp(−z)), but the ReLU typically learns much faster in networks with many layers, allowing training of a deep supervised network without unsupervised pre-training28. Units that are not in the input or output layer are conventionally called hidden units. The hidden layers can be seen as distorting the input in a non-linear way so that categories become linearly separable by the last layer (Fig. 1). In the late 1990s, neural nets and backpropagation were largely forsaken by the machine-learning community and ignored by the computer-vision and speech-recognition communities. It was widely thought that learning useful, multistage, feature extractors with little prior knowledge was infeasible. In particular, it was commonly thought that simple gradient descent would get trapped in poor local minima — weight configurations for which no small change would reduce the average error. In practice, poor local minima are rarely a problem with large networks. Regardless of the initial conditions, the system nearly always reaches solutions of very similar quality. Recent theoretical and empirical results strongly suggest that local minima are not a serious issue in general. Instead, the landscape is packed with a combinatorially large number of saddle points where the gradient is zero, and the surface curves up in most dimensions and curves down in the remainder29, 30. The analysis seems to show that saddle points with only a few downward curving directions are present in very large numbers, but almost all of them have very similar values of the objective function. 
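The chain-rule bookkeeping is compact enough to show directly; below is a two-layer ReLU network with squared error, where gradients flow from the output back to the input weights. Sizes and learning rate are toy choices.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=(8, 4))                # a small batch of inputs
t = rng.normal(size=(8, 1))                # regression targets
W1 = rng.normal(size=(4, 16)) * 0.1
W2 = rng.normal(size=(16, 1)) * 0.1

for _ in range(500):
    h_pre = x @ W1                         # forward: linear
    h = np.maximum(h_pre, 0)               # forward: ReLU f(z) = max(z, 0)
    y = h @ W2                             # forward: output
    dy = 2 * (y - t) / len(x)              # gradient of mean squared error
    dW2 = h.T @ dy                         # gradient w.r.t. W2
    dh = dy @ W2.T                         # backpropagate through W2
    dh_pre = dh * (h_pre > 0)              # backpropagate through the ReLU
    dW1 = x.T @ dh_pre                     # gradient w.r.t. W1
    W1 -= 0.1 * dW1                        # step against the gradients
    W2 -= 0.1 * dW2
print(float(np.mean((np.maximum(x @ W1, 0) @ W2 - t) ** 2)))  # loss after training
```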
Hence, it does not much matter which of these saddle points the algorithm gets stuck at. Interest in deep feedforward networks was revived around 2006 (refs 31,32,33,34) by a group of researchers brought together by the Canadian Institute for Advanced Research (CIFAR). The researchers introduced unsupervised learning procedures that could create layers of feature detectors without requiring labelled data. The objective in learning each layer of feature detectors was to be able to reconstruct or model the activities of feature detectors (or raw inputs) in the layer below. By 'pre-training' several layers of progressively more complex feature detectors using this reconstruction objective, the weights of a deep network could be initialized to sensible values. A final layer of output units could then be added to the top of the network and the whole deep system could be fine-tuned using standard backpropagation33, 34, 35. This worked remarkably well for recognizing handwritten digits or for detecting pedestrians, especially when the amount of labelled data was very limited36. The first major application of this pre-training approach was in speech recognition, and it was made possible by the advent of fast graphics processing units (GPUs) that were convenient to program37 and allowed researchers to train networks 10 or 20 times faster. In 2009, the approach was used to map short temporal windows of coefficients extracted from a sound wave to a set of probabilities for the various fragments of speech that might be represented by the frame in the centre of the window. It achieved record-breaking results on a standard speech recognition benchmark that used a small vocabulary38 and was quickly developed to give record-breaking results on a large vocabulary task39. By 2012, versions of the deep net from 2009 were being developed by many of the major speech groups6 and were already being deployed in Android phones. For smaller data sets, unsupervised pre-training helps to prevent overfitting40, leading to significantly better generalization when the number of labelled examples is small, or in a transfer setting where we have lots of examples for some 'source' tasks but very few for some 'target' tasks. Once deep learning had been rehabilitated, it turned out that the pre-training stage was only needed for small data sets. There was, however, one particular type of deep, feedforward network that was much easier to train and generalized much better than networks with full connectivity between adjacent layers. This was the convolutional neural network (ConvNet)41, 42. It achieved many practical successes during the period when neural networks were out of favour and it has recently been widely adopted by the computer-vision community. ConvNets are designed to process data that come in the form of multiple arrays, for example a colour image composed of three 2D arrays containing pixel intensities in the three colour channels. Many data modalities are in the form of multiple arrays: 1D for signals and sequences, including language; 2D for images or audio spectrograms; and 3D for video or volumetric images. There are four key ideas behind ConvNets that take advantage of the properties of natural signals: local connections, shared weights, pooling and the use of many layers. The architecture of a typical ConvNet (Fig. 2) is structured as a series of stages. The first few stages are composed of two types of layers: convolutional layers and pooling layers. 
Units in a convolutional layer are organized in feature maps, within which each unit is connected to local patches in the feature maps of the previous layer through a set of weights called a filter bank. The result of this local weighted sum is then passed through a non-linearity such as a ReLU. All units in a feature map share the same filter bank. Different feature maps in a layer use different filter banks. The reason for this architecture is twofold. First, in array data such as images, local groups of values are often highly correlated, forming distinctive local motifs that are easily detected. Second, the local statistics of images and other signals are invariant to location. In other words, if a motif can appear in one part of the image, it could appear anywhere, hence the idea of units at different locations sharing the same weights and detecting the same pattern in different parts of the array. Mathematically, the filtering operation performed by a feature map is a discrete convolution, hence the name. Although the role of the convolutional layer is to detect local conjunctions of features from the previous layer, the role of the pooling layer is to merge semantically similar features into one. Because the relative positions of the features forming a motif can vary somewhat, reliably detecting the motif can be done by coarse-graining the position of each feature. A typical pooling unit computes the maximum of a local patch of units in one feature map (or in a few feature maps). Neighbouring pooling units take input from patches that are shifted by more than one row or column, thereby reducing the dimension of the representation and creating an invariance to small shifts and distortions. Two or three stages of convolution, non-linearity and pooling are stacked, followed by more convolutional and fully-connected layers. Backpropagating gradients through a ConvNet is as simple as through a regular deep network, allowing all the weights in all the filter banks to be trained. Deep neural networks exploit the property that many natural signals are compositional hierarchies, in which higher-level features are obtained by composing lower-level ones. In images, local combinations of edges form motifs, motifs assemble into parts, and parts form objects. Similar hierarchies exist in speech and text from sounds to phones, phonemes, syllables, words and sentences. The pooling allows representations to vary very little when elements in the previous layer vary in position and appearance. The convolutional and pooling layers in ConvNets are directly inspired by the classic notions of simple cells and complex cells in visual neuroscience43, and the overall architecture is reminiscent of the LGN–V1–V2–V4–IT hierarchy in the visual cortex ventral pathway44. When ConvNet models and monkeys are shown the same picture, the activations of high-level units in the ConvNet explains half of the variance of random sets of 160 neurons in the monkey's inferotemporal cortex45. ConvNets have their roots in the neocognitron46, the architecture of which was somewhat similar, but did not have an end-to-end supervised-learning algorithm such as backpropagation. A primitive 1D ConvNet called a time-delay neural net was used for the recognition of phonemes and simple words47, 48. There have been numerous applications of convolutional networks going back to the early 1990s, starting with time-delay neural networks for speech recognition47 and document reading42. 
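Both layer types are easy to state in plain NumPy: a valid 2-D convolution (really a cross-correlation, as in most deep-learning libraries) with a shared filter bank of one kernel, a ReLU, then 2×2 max pooling. The edge filter is an illustrative motif detector.

```python
import numpy as np

def conv2d(image, kernel):
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):                    # every unit shares the same kernel
        for j in range(ow):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return np.maximum(out, 0)              # ReLU non-linearity

def max_pool(fmap, size=2):
    h, w = fmap.shape[0] // size, fmap.shape[1] // size
    return fmap[:h*size, :w*size].reshape(h, size, w, size).max(axis=(1, 3))

img = np.random.default_rng(0).normal(size=(8, 8))
edge = np.array([[1.0, -1.0], [1.0, -1.0]])  # a vertical-edge motif detector
print(max_pool(conv2d(img, edge)).shape)     # (3, 3)
```

Pooling over each 2×2 patch is what buys the small-shift invariance described above.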
The document reading system used a ConvNet trained jointly with a probabilistic model that implemented language constraints. By the late 1990s this system was reading over 10% of all the cheques in the United States. A number of ConvNet-based optical character recognition and handwriting recognition systems were later deployed by Microsoft49. ConvNets were also experimented with in the early 1990s for object detection in natural images, including faces and hands50, 51, and for face recognition52. Since the early 2000s, ConvNets have been applied with great success to the detection, segmentation and recognition of objects and regions in images. These were all tasks in which labelled data was relatively abundant, such as traffic sign recognition53, the segmentation of biological images54 particularly for connectomics55, and the detection of faces, text, pedestrians and human bodies in natural images36, 50, 51, 56, 57, 58. A major recent practical success of ConvNets is face recognition59. Importantly, images can be labelled at the pixel level, which will have applications in technology, including autonomous mobile robots and self-driving cars60, 61. Companies such as Mobileye and NVIDIA are using such ConvNet-based methods in their upcoming vision systems for cars. Other applications gaining importance involve natural language understanding14 and speech recognition7. Despite these successes, ConvNets were largely forsaken by the mainstream computer-vision and machine-learning communities until the ImageNet competition in 2012. When deep convolutional networks were applied to a data set of about a million images from the web that contained 1,000 different classes, they achieved spectacular results, almost halving the error rates of the best competing approaches1. This success came from the efficient use of GPUs, ReLUs, a new regularization technique called dropout62, and techniques to generate more training examples by deforming the existing ones. This success has brought about a revolution in computer vision; ConvNets are now the dominant approach for almost all recognition and detection tasks4, 58, 59, 63, 64, 65 and approach human performance on some tasks. A recent stunning demonstration combines ConvNets and recurrent net modules for the generation of image captions (Fig. 3). Recent ConvNet architectures have 10 to 20 layers of ReLUs, hundreds of millions of weights, and billions of connections between units. Whereas training such large networks could have taken weeks only two years ago, progress in hardware, software and algorithm parallelization have reduced training times to a few hours. The performance of ConvNet-based vision systems has caused most major technology companies, including Google, Facebook, Microsoft, IBM, Yahoo!, Twitter and Adobe, as well as a quickly growing number of start-ups to initiate research and development projects and to deploy ConvNet-based image understanding products and services. ConvNets are easily amenable to efficient hardware implementations in chips or field-programmable gate arrays66, 67. A number of companies such as NVIDIA, Mobileye, Intel, Qualcomm and Samsung are developing ConvNet chips to enable real-time vision applications in smartphones, cameras, robots and self-driving cars. Deep-learning theory shows that deep nets have two different exponential advantages over classic learning algorithms that do not use distributed representations21. 
Both of these advantages arise from the power of composition and depend on the underlying data-generating distribution having an appropriate componential structure40. First, learning distributed representations enable generalization to new combinations of the values of learned features beyond those seen during training (for example, 2n combinations are possible with n binary features)68, 69. Second, composing layers of representation in a deep net brings the potential for another exponential advantage70 (exponential in the depth). The hidden layers of a multilayer neural network learn to represent the network's inputs in a way that makes it easy to predict the target outputs. This is nicely demonstrated by training a multilayer neural network to predict the next word in a sequence from a local context of earlier words71. Each word in the context is presented to the network as a one-of-N vector, that is, one component has a value of 1 and the rest are 0. In the first layer, each word creates a different pattern of activations, or word vectors (Fig. 4). In a language model, the other layers of the network learn to convert the input word vectors into an output word vector for the predicted next word, which can be used to predict the probability for any word in the vocabulary to appear as the next word. The network learns word vectors that contain many active components each of which can be interpreted as a separate feature of the word, as was first demonstrated27 in the context of learning distributed representations for symbols. These semantic features were not explicitly present in the input. They were discovered by the learning procedure as a good way of factorizing the structured relationships between the input and output symbols into multiple 'micro-rules'. Learning word vectors turned out to also work very well when the word sequences come from a large corpus of real text and the individual micro-rules are unreliable71. When trained to predict the next word in a news story, for example, the learned word vectors for Tuesday and Wednesday are very similar, as are the word vectors for Sweden and Norway. Such representations are called distributed representations because their elements (the features) are not mutually exclusive and their many configurations correspond to the variations seen in the observed data. These word vectors are composed of learned features that were not determined ahead of time by experts, but automatically discovered by the neural network. Vector representations of words learned from text are now very widely used in natural language applications14, 17, 72, 73, 74, 75, 76. The issue of representation lies at the heart of the debate between the logic-inspired and the neural-network-inspired paradigms for cognition. In the logic-inspired paradigm, an instance of a symbol is something for which the only property is that it is either identical or non-identical to other symbol instances. It has no internal structure that is relevant to its use; and to reason with symbols, they must be bound to the variables in judiciously chosen rules of inference. By contrast, neural networks just use big activity vectors, big weight matrices and scalar non-linearities to perform the type of fast 'intuitive' inference that underpins effortless commonsense reasoning. 
Before the introduction of neural language models71, the standard approach to statistical modelling of language did not exploit distributed representations: it was based on counting frequencies of occurrences of short symbol sequences of length up to N (called N-grams). The number of possible N-grams is on the order of VN, where V is the vocabulary size, so taking into account a context of more than a handful of words would require very large training corpora. N-grams treat each word as an atomic unit, so they cannot generalize across semantically related sequences of words, whereas neural language models can because they associate each word with a vector of real valued features, and semantically related words end up close to each other in that vector space (Fig. 4). When backpropagation was first introduced, its most exciting use was for training recurrent neural networks (RNNs). For tasks that involve sequential inputs, such as speech and language, it is often better to use RNNs (Fig. 5). RNNs process an input sequence one element at a time, maintaining in their hidden units a 'state vector' that implicitly contains information about the history of all the past elements of the sequence. When we consider the outputs of the hidden units at different discrete time steps as if they were the outputs of different neurons in a deep multilayer network (Fig. 5, right), it becomes clear how we can apply backpropagation to train RNNs. RNNs are very powerful dynamic systems, but training them has proved to be problematic because the backpropagated gradients either grow or shrink at each time step, so over many time steps they typically explode or vanish77, 78. Thanks to advances in their architecture79, 80 and ways of training them81, 82, RNNs have been found to be very good at predicting the next character in the text83 or the next word in a sequence75, but they can also be used for more complex tasks. For example, after reading an English sentence one word at a time, an English 'encoder' network can be trained so that the final state vector of its hidden units is a good representation of the thought expressed by the sentence. This thought vector can then be used as the initial hidden state of (or as extra input to) a jointly trained French 'decoder' network, which outputs a probability distribution for the first word of the French translation. If a particular first word is chosen from this distribution and provided as input to the decoder network it will then output a probability distribution for the second word of the translation and so on until a full stop is chosen17, 72, 76. Overall, this process generates sequences of French words according to a probability distribution that depends on the English sentence. This rather naive way of performing machine translation has quickly become competitive with the state-of-the-art, and this raises serious doubts about whether understanding a sentence requires anything like the internal symbolic expressions that are manipulated by using inference rules. It is more compatible with the view that everyday reasoning involves many simultaneous analogies that each contribute plausibility to a conclusion84, 85. Instead of translating the meaning of a French sentence into an English sentence, one can learn to 'translate' the meaning of an image into an English sentence (Fig. 3). The encoder here is a deep ConvNet that converts the pixels into an activity vector in its last hidden layer. 
The decoder is an RNN similar to the ones used for machine translation and neural language modelling. There has been a surge of interest in such systems recently (see examples mentioned in ref. 86). RNNs, once unfolded in time (Fig. 5), can be seen as very deep feedforward networks in which all the layers share the same weights. Although their main purpose is to learn long-term dependencies, theoretical and empirical evidence shows that it is difficult to learn to store information for very long78. To correct for that, one idea is to augment the network with an explicit memory. The first proposal of this kind is the long short-term memory (LSTM) networks that use special hidden units, the natural behaviour of which is to remember inputs for a long time79. A special unit called the memory cell acts like an accumulator or a gated leaky neuron: it has a connection to itself at the next time step that has a weight of one, so it copies its own real-valued state and accumulates the external signal, but this self-connection is multiplicatively gated by another unit that learns to decide when to clear the content of the memory. LSTM networks have subsequently proved to be more effective than conventional RNNs, especially when they have several layers for each time step87, enabling an entire speech recognition system that goes all the way from acoustics to the sequence of characters in the transcription. LSTM networks or related forms of gated units are also currently used for the encoder and decoder networks that perform so well at machine translation17, 72, 76. Over the past year, several authors have made different proposals to augment RNNs with a memory module. Proposals include the Neural Turing Machine in which the network is augmented by a 'tape-like' memory that the RNN can choose to read from or write to88, and memory networks, in which a regular network is augmented by a kind of associative memory89. Memory networks have yielded excellent performance on standard question-answering benchmarks. The memory is used to remember the story about which the network is later asked to answer questions. Beyond simple memorization, neural Turing machines and memory networks are being used for tasks that would normally require reasoning and symbol manipulation. Neural Turing machines can be taught 'algorithms'. Among other things, they can learn to output a sorted list of symbols when their input consists of an unsorted sequence in which each symbol is accompanied by a real value that indicates its priority in the list88. Memory networks can be trained to keep track of the state of the world in a setting similar to a text adventure game and after reading a story, they can answer questions that require complex inference90. In one test example, the network is shown a 15-sentence version of the The Lord of the Rings and correctly answers questions such as “where is Frodo now?”89. Unsupervised learning91, 92, 93, 94, 95, 96, 97, 98 had a catalytic effect in reviving interest in deep learning, but has since been overshadowed by the successes of purely supervised learning. Although we have not focused on it in this Review, we expect unsupervised learning to become far more important in the longer term. Human and animal learning is largely unsupervised: we discover the structure of the world by observing it, not by being told the name of every object. 
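One step of the memory cell described here, with the usual input, forget and output gates (the forget gate generalizes the fixed self-connection of weight one); shapes and initialization are illustrative.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, b):
    """x: input, h: hidden state, c: memory cell. W maps [x; h] to 4 gates."""
    z = W @ np.concatenate([x, h]) + b
    i, f, o, g = np.split(z, 4)
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)   # gates lie in (0, 1)
    c = f * c + i * np.tanh(g)     # accumulate; f learns when to clear
    h = o * np.tanh(c)             # gated read-out of the cell
    return h, c

d_in, d_h = 3, 5
rng = np.random.default_rng(0)
W = rng.normal(size=(4 * d_h, d_in + d_h)) * 0.1
b = np.zeros(4 * d_h)
h, c = np.zeros(d_h), np.zeros(d_h)
for x in rng.normal(size=(10, d_in)):   # run over a length-10 sequence
    h, c = lstm_step(x, h, c, W, b)
print(h)
```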
Human vision is an active process that sequentially samples the optic array in an intelligent, task-specific way using a small, high-resolution fovea with a large, low-resolution surround. We expect much of the future progress in vision to come from systems that are trained end-to-end and combine ConvNets with RNNs that use reinforcement learning to decide where to look. Systems combining deep learning and reinforcement learning are in their infancy, but they already outperform passive vision systems99 at classification tasks and produce impressive results in learning to play many different video games100. Natural language understanding is another area in which deep learning is poised to make a large impact over the next few years. We expect systems that use RNNs to understand sentences or whole documents will become much better when they learn strategies for selectively attending to one part at a time76, 86. Ultimately, major progress in artificial intelligence will come about through systems that combine representation learning with complex reasoning. Although deep learning and simple reasoning have been used for speech and handwriting recognition for a long time, new paradigms are needed to replace rule-based manipulation of symbolic expressions by operations on large vectors101. The authors would like to thank the Natural Sciences and Engineering Research Council of Canada, the Canadian Institute For Advanced Research (CIFAR), the National Science Foundation and Office of Naval Research for support. Y.L. and Y.B. are CIFAR fellows.
Language-based information-flow security Current standard security practices do not provide substantial assurance that the end-to-end behavior of a computing system satisfies important security policies such as confidentiality. An end-to-end confidentiality policy might assert that secret input data cannot be inferred by an attacker through the attacker's observations of system output; this policy regulates information flow. Conventional security mechanisms such as access control and encryption do not directly address the enforcement of information-flow policies. Previously, a promising new approach has been developed: the use of programming-language techniques for specifying and enforcing information-flow policies. In this paper, we survey the past three decades of research on information-flow security, particularly focusing on work that uses static program analysis to enforce information-flow policies. We give a structured view of work in the area and identify some important open challenges.
An 8-bit 100-MHz CMOS linear interpolation DAC An 8-bit 100-MHz CMOS linear interpolation digital-to-analog converter (DAC) is presented. It applies a time-interleaved structure on an 8-bit binary-weighted DAC, using 16 evenly skewed clocks generated by a voltage-controlled delay line to realize the linear interpolation function. The linear interpolation increases the attenuation of the DAC's image components. The requirement for the analog re...
Dynamic adaptive virtual core mapping to improve power, energy, and performance in multi-socket multicores Consider a multithreaded parallel application running inside a multicore virtual machine context that is itself hosted on a multi-socket multicore physical machine. How should the VMM map virtual cores to physical cores? We compare a local mapping, which compacts virtual cores to processor sockets, and an interleaved mapping, which spreads them over the sockets. Simply choosing between these two mappings exposes clear tradeoffs between performance, energy, and power. We then describe the design, implementation, and evaluation of a system that automatically and dynamically chooses between the two mappings. The system consists of a set of efficient online VMM-based mechanisms and policies that (a) capture the relevant characteristics of memory reference behavior, (b) provide a policy and mechanism for configuring the mapping of virtual machine cores to physical cores that optimizes for power, energy, or performance, and (c) drive dynamic migrations of virtual cores among local physical cores based on the workload and the currently specified objective. Using these techniques we demonstrate that the performance of SPEC and PARSEC benchmarks can be increased by as much as 66%, energy reduced by as much as 31%, and power reduced by as much as 17%, depending on the optimization objective.
Decentralized adaptive tracking control for a class of interconnected nonlinear time-varying systems In this paper, aiming at output tracking, a decentralized adaptive backstepping control scheme is proposed for a class of interconnected nonlinear time-varying systems. By introducing a bound estimation approach and two smooth functions, the obstacle caused by unknown time-varying parameters and unknown interactions is circumvented and all signals of the overall closed-loop system are proved to be globally uniformly bounded, without any restriction on the parameters variation speed. Moreover, it is shown that the tracking errors can converge to predefined arbitrarily small residual sets with prescribed convergence rate and maximum overshoot, independent of the parameters variation speed and the strength of interactions. Simulation results performed on double inverted pendulums are presented to illustrate the effectiveness of the proposed scheme.
A 12.6 mW, 573-2901 kS/s Reconfigurable Processor for Reconstruction of Compressively Sensed Physiological Signals. This article presents a reconfigurable processor based on the alternating direction method of multipliers (ADMM) algorithm for reconstructing compressively sensed physiological signals. The architecture is flexible to support physiological ExG [electrocardiography (ECG), electromyography (EMG), and electroencephalography (EEG)] signal with various signal dimensions (128, 256, 384, and 512). Data c...
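The core ADMM iteration such a processor implements can be sketched for the standard lasso form min ½‖Ax − b‖² + λ‖z‖₁ with the constraint x = z; the matrix sizes and constants below are toy values, not the chip's configuration.

```python
import numpy as np

def admm_lasso(A, b, lam=0.1, rho=1.0, iters=200):
    n = A.shape[1]
    x = z = u = np.zeros(n)
    inv_factor = np.linalg.inv(A.T @ A + rho * np.eye(n))  # cached once
    Atb = A.T @ b
    for _ in range(iters):
        x = inv_factor @ (Atb + rho * (z - u))             # quadratic update
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0)  # shrink
        u = u + x - z                                      # dual update
    return z

rng = np.random.default_rng(0)
A = rng.normal(size=(64, 256))             # compressive measurements
x_true = np.zeros(256)
x_true[[5, 80, 200]] = [1.0, -2.0, 1.5]    # a sparse signal to recover
z = admm_lasso(A, A @ x_true)
print(np.flatnonzero(np.abs(z) > 0.5))     # ideally the support [5, 80, 200]
```

The three updates map naturally onto hardware: a cached matrix-vector product, an elementwise soft threshold, and an elementwise accumulate.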
Scores: 1.101732, 0.101732, 0.101732, 0.1, 0.1, 0.051667, 0.02575, 0.001667, 0.000012, 0, 0, 0, 0, 0
ZeNA: Zero-Aware Neural Network Accelerator. It has been observed that the majority of the kernel weights and input activations in the state-of-the-art convolution neural networks (CNNs) have zero values. This article proposes a CNN hardware accelerator that exploits this property to achieve significant performance and energy improvements.
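In software terms, the exploited property is just this: accumulate only where both operands are non-zero. The sketch below is a reference for what a zero-aware datapath avoids, not ZeNA's microarchitecture.

```python
def zero_aware_dot(weights, activations):
    """Dot product that counts only the effectual multiply-accumulates."""
    acc, macs = 0.0, 0
    for w, a in zip(weights, activations):
        if w != 0 and a != 0:       # skip ineffectual work entirely
            acc += w * a
            macs += 1
    return acc, macs

w = [0, 0.5, 0, -1.0, 0, 0, 2.0, 0]
a = [1.0, 0, 3.0, 2.0, 0, 0, 1.0, 4.0]
print(zero_aware_dot(w, a))  # (0.0, 2): only 2 of 8 MACs performed
```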
Mixtures of Lightweight Deep Convolutional Neural Networks: Applied to Agricultural Robotics. We propose a novel approach for training deep convolutional neural networks (DCNNs) that allows us to tradeoff complexity and accuracy to learn lightweight models suitable for robotic platforms such as AgBot II (which performs automated weed management). Our approach consists of three stages, the first is to adapt a pre-trained model to the task at hand. This provides state-of-the-art performance ...
Tetris: re-architecting convolutional neural network computation for machine learning accelerators Inference efficiency is the predominant consideration in designing deep learning accelerators. Previous work mainly focuses on skipping zero values to deal with remarkable ineffectual computation, while zero bits in non-zero values, another major source of ineffectual computation, are often ignored. The reason lies in the difficulty of extracting essential bits during the multiply-and-accumulate (MAC) operation in the processing element. Based on the fact that zero bits occupy as much as a 68.9% fraction of the overall weights of modern deep convolutional neural network models, this paper first proposes a weight kneading technique that eliminates ineffectual computation caused by either zero-value weights or zero bits in non-zero weights, simultaneously. In addition, a split-and-accumulate (SAC) computing pattern that replaces the conventional MAC, as well as the corresponding hardware accelerator design called Tetris, are proposed to support weight kneading at the hardware level. Experimental results prove that Tetris can speed up inference by up to 1.50x and improve power efficiency by up to 5.33x compared with the state-of-the-art baselines.
GoSPA: An Energy-efficient High-performance Globally Optimized SParse Convolutional Neural Network Accelerator The co-existence of activation sparsity and model sparsity in convolutional neural network (CNN) models makes sparsity-aware CNN hardware designs very attractive. The existing sparse CNN accelerators utilize an intersection operation to search for and identify the key positions of the matched entries between two sparse vectors, and hence avoid unnecessary computations. However, these state-of-the-art designs still suffer from three major architecture-level drawbacks: 1) the hardware cost of the intersection operation is high; 2) the computation phase stalls frequently due to strong data dependency between the intersection and computation phases; and 3) the explicit intersection operation incurs unnecessary data transfer. By leveraging the knowledge of the complete sparse 2-D convolution, this paper proposes two key ideas that overcome all three drawbacks. First, an implicit on-the-fly intersection is proposed to realize the optimal solution for intersection between one static stream and one dynamic stream, which is the case for sparse neural network inference. Second, by leveraging the global computation structure of 2-D convolution, we propose a specialized computation reordering to ensure that an activation is transferred only if necessary and only once. Based on these two key ideas, we develop GoSPA, an energy-efficient high-performance Globally Optimized SParse CNN Accelerator. GoSPA is implemented with CMOS 28nm technology. Compared with the state-of-the-art sparse CNN architecture, GoSPA achieves average 1.38×, 1.28×, 1.23×, 1.17×, 1.21× and 1.28× speedup on AlexNet, VGG, GoogLeNet, MobileNet, ResNet and ResNeXt workloads, respectively. Also, GoSPA achieves 5.38×, 4.96×, 4.79×, 5.02×, 4.86× and 2.06× energy efficiency improvement on AlexNet, VGG, GoogLeNet, MobileNet, ResNet and ResNeXt, respectively. In a more comprehensive comparison including DRAM access, GoSPA also shows significant performance improvement over the existing designs.
Cambricon-S - Addressing Irregularity in Sparse Neural Networks through A Cooperative Software/Hardware Approach. Neural networks have rapidly become the dominant algorithms, as they achieve state-of-the-art performance in a broad range of applications such as image recognition, speech recognition and natural language processing. However, neural networks keep moving towards deeper and larger architectures, posing great challenges due to the huge amounts of data and computation involved. Although sparsity has emerged as an effective solution for directly reducing the intensity of computation and memory accesses, the irregularity caused by sparsity (including sparse synapses and neurons) prevents accelerators from completely leveraging the benefits; it also introduces a costly indexing module into accelerators. In this paper, we propose a cooperative software/hardware approach to address the irregularity of sparse neural networks efficiently. We first observe local convergence: larger weights tend to gather into small clusters during training. Based on that key observation, we propose a software-based coarse-grained pruning technique that drastically reduces the irregularity of sparse synapses. The coarse-grained pruning technique, together with local quantization, significantly reduces the size of indexes and improves the network compression ratio. We further design a hardware accelerator, Cambricon-S, to address the remaining irregularity of sparse synapses and neurons efficiently. The novel accelerator features a selector module to filter unnecessary synapses and neurons. Compared with a state-of-the-art sparse neural network accelerator, our accelerator is 1.71x and 1.37x better in terms of performance and energy efficiency, respectively.
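Coarse-grained pruning can be sketched in a few lines: instead of zeroing individual weights, whole blocks are kept or dropped by their norm, so a single index describes many weights. A minimal illustration of the idea, assuming simple 1-D blocks and a magnitude criterion; the block shape and keep ratio here are illustrative.

```python
import numpy as np

def coarse_grained_prune(w, block, keep_ratio):
    """Keep or drop whole blocks of weights by their L2 norm.

    One index per surviving block replaces one index per surviving
    weight, shrinking the indexing overhead that irregular
    fine-grained sparsity imposes on accelerators.
    """
    blocks = w.reshape(-1, block)
    norms = np.linalg.norm(blocks, axis=1)
    k = max(1, int(keep_ratio * len(blocks)))
    keep = np.sort(np.argsort(norms)[-k:])   # indices of kept blocks
    return keep, blocks[keep]                # compact representation

rng = np.random.default_rng(1)
w = rng.normal(size=64)
idx, vals = coarse_grained_prune(w, block=8, keep_ratio=0.25)
print(idx, vals.shape)   # 2 surviving blocks of 8 weights each
```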
Simba: Scaling Deep-Learning Inference with Multi-Chip-Module-Based Architecture Package-level integration using multi-chip-modules (MCMs) is a promising approach for building large-scale systems. Compared to a large monolithic die, an MCM combines many smaller chiplets into a larger system, substantially reducing fabrication and design costs. Current MCMs typically contain only a handful of coarse-grained large chiplets due to the high area, performance, and energy overheads associated with inter-chiplet communication. This work investigates and quantifies the costs and benefits of using MCMs with fine-grained chiplets for deep learning inference, an application area with large compute and on-chip storage requirements. To evaluate the approach, we architected, implemented, fabricated, and tested Simba, a 36-chiplet prototype MCM system for deep-learning inference. Each chiplet achieves 4 TOPS peak performance, and the 36-chiplet MCM package achieves up to 128 TOPS and up to 6.1 TOPS/W. The MCM is configurable to support a flexible mapping of DNN layers to the distributed compute and storage units. To mitigate inter-chiplet communication overheads, we introduce three tiling optimizations that improve data locality. These optimizations achieve up to 16% speedup compared to the baseline layer mapping. Our evaluation shows that Simba can process 1988 images/s running ResNet-50 with a batch size of one, delivering an inference latency of 0.50 ms.
TIE: energy-efficient tensor train-based inference engine for deep neural network In the era of artificial intelligence (AI), deep neural networks (DNNs) have emerged as the most important and powerful AI technique. However, large DNN models are both storage and computation intensive, posing significant challenges for adopting DNNs in resource-constrained scenarios. Thus, model compression becomes a crucial technique to ensure wide deployment of DNNs. This paper advances the state-of-the-art by considering tensor train (TT) decomposition, a very promising but as-yet underexplored compression technique in the architecture domain. The method features an extremely high compression ratio. The challenge, however, is that inference on TT-format DNN models inherently incurs a massive amount of redundant computation, causing significant energy consumption. Thus, the straightforward application of TT decomposition is not feasible. To address this fundamental challenge, this paper develops a computation-efficient inference scheme for TT-format DNNs, which enjoys two key merits: 1) it achieves the theoretical limit on the number of multiplications, thus eliminating all redundant computation; and 2) its multi-stage processing scheme reduces the intensive memory access to all tensor cores, bringing significant energy savings. Based on the novel inference scheme, we develop TIE, a TT-format compressed DNN-targeted inference engine. TIE is highly flexible, supporting different types of networks for different needs. A 16-processing-element (PE) prototype is implemented using CMOS 28nm technology. Operating at 1000MHz, the TIE accelerator occupies 1.74mm2 and consumes 154.8mW. Compared with EIE, TIE achieves 7.22× ~ 10.66× better area efficiency and 3.03× ~ 4.48× better energy efficiency on different workloads, respectively. Compared with CirCNN, TIE achieves 5.96× and 4.56× higher throughput and energy efficiency, respectively. These results show that TIE exhibits significant advantages over state-of-the-art solutions.
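For reference, the computation TIE accelerates is TT-format matrix-vector multiplication: the dense weight matrix is never materialized, and each core is contracted exactly once. A minimal numpy sketch, assuming the standard (rank, out-mode, in-mode, rank) core layout; it illustrates the computation pattern, not TIE's scheduling or its redundancy-elimination scheme.

```python
import numpy as np

def tt_matvec(cores, x):
    """y = W @ x where W is stored as TT-matrix cores.

    cores[k] has shape (r[k], m[k], n[k], r[k+1]) with r[0] = r[d] = 1.
    The dense W is never built: each step contracts one core, which is
    where the multiplication savings come from.
    """
    n = [c.shape[2] for c in cores]
    z = x.reshape([1] + n)                      # axes: (rank, n1, ..., nd)
    for G in cores:
        # contract the rank axis and the current input-mode axis
        z = np.tensordot(G, z, axes=([0, 2], [0, 1]))
        z = np.moveaxis(z, 0, -1)               # output mode goes to the back
    return z.reshape(-1)                        # final rank is 1

# Self-check against the dense matrix for two cores (d = 2).
rng = np.random.default_rng(0)
G1 = rng.normal(size=(1, 4, 5, 3))
G2 = rng.normal(size=(3, 2, 3, 1))
W = np.einsum('aijb,bklc->ikjl', G1, G2).reshape(4 * 2, 5 * 3)
x = rng.normal(size=15)
assert np.allclose(tt_matvec([G1, G2], x), W @ x)
```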
GRAM - graph processing in a ReRAM-based computational memory. The performance of graph processing on real-world graphs is limited by inefficient memory behaviour in traditional systems because of random memory access patterns. Offloading computations to the memory is a promising strategy to overcome such challenges. In this paper, we exploit resistive memory (ReRAM) based processing-in-memory (PIM) technology to accelerate graph applications. The proposed solution, GRAM, can efficiently execute the vertex-centric model, which is widely used in large-scale parallel graph processing programs, in the computational memory. The hardware-software co-design used in GRAM maximizes computation parallelism while minimizing the number of data movements. Based on our experiments with three important graph kernels on seven real-world graphs, GRAM provides 122.5× and 11.1× speedup compared with an in-memory graph system and optimized multithreading algorithms running on a multi-core CPU. Compared to a GPU-based graph acceleration library and a recently proposed PIM accelerator, GRAM improves the performance by 7.1× and 3.8×, respectively.
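For reference, the vertex-centric model that GRAM executes in memory can be stated in a few lines of host code: each active vertex runs the same small program over its edges, and newly updated vertices form the next frontier. A minimal single-threaded sketch (BFS as the vertex program); the PIM mapping of this loop is the paper's contribution and is not modeled here.

```python
from collections import deque

def vertex_centric_bfs(adj, source):
    """Vertex-centric BFS: every active vertex pushes updates
    along its edges; newly updated vertices form the next frontier."""
    dist = {v: None for v in adj}
    dist[source] = 0
    frontier = deque([source])
    while frontier:
        u = frontier.popleft()
        for v in adj[u]:                 # the per-vertex "program"
            if dist[v] is None:          # random access dominates here,
                dist[v] = dist[u] + 1    # which is what PIM accelerates
                frontier.append(v)
    return dist

adj = {0: [1, 2], 1: [3], 2: [3], 3: []}
print(vertex_centric_bfs(adj, 0))   # {0: 0, 1: 1, 2: 1, 3: 2}
```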
Concurrent Data Structures for Near-Memory Computing. The performance gap between memory and CPU has grown exponentially. To bridge this gap, hardware architects have proposed near-memory computing (also called processing-in-memory, or PIM), where a lightweight processor (called a PIM core) is located close to memory. Due to its proximity to memory, a memory access from a PIM core is much faster than that from a CPU core. New advances in 3D integration and die-stacked memory make PIM viable in the near future. Prior work has shown significant performance improvements by using PIM for embarrassingly parallel and data-intensive applications, as well as for pointer-chasing traversals in sequential data structures. However, current server machines have hundreds of cores, and algorithms for concurrent data structures exploit these cores to achieve high throughput and scalability, with significant benefits over sequential data structures. Thus, it is important to examine how PIM performs with respect to modern concurrent data structures and understand how concurrent data structures can be developed to take advantage of PIM. This paper is the first to examine the design of concurrent data structures for PIM. We show two main results: (1) naive PIM data structures cannot outperform state-of-the-art concurrent data structures, such as pointer-chasing data structures and FIFO queues, (2) novel designs for PIM data structures, using techniques such as combining, partitioning and pipelining, can outperform traditional concurrent data structures, with a significantly simpler design.
The geometry of innocent flesh on the bone: return-into-libc without function calls (on the x86) We present new techniques that allow a return-into-libc attack to be mounted on x86 executables that call no functions at all. Our attack combines a large number of short instruction sequences to build gadgets that allow arbitrary computation. We show how to discover such instruction sequences by means of static analysis. We make use, in an essential way, of the properties of the x86 instruction set.
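The gadget-discovery step can be approximated in user space: scan executable bytes for every suffix that ends at a ret (0xc3) opcode, since x86's variable-length encoding means useful sequences can begin mid-instruction. A toy byte-level sketch with an illustrative buffer and no real disassembler, just to show the structure of the static search.

```python
RET = 0xC3
MAX_GADGET_LEN = 5   # look back at most this many bytes before each ret

def find_gadget_candidates(code):
    """Return (offset, bytes) pairs ending in a ret byte.

    Real tooling validates each candidate with a disassembler; here we
    only enumerate suffixes, which is the essence of the static search.
    """
    out = []
    for i, b in enumerate(code):
        if b == RET:
            for start in range(max(0, i - MAX_GADGET_LEN), i + 1):
                out.append((start, bytes(code[start:i + 1])))
    return out

# Illustrative buffer: 'pop eax; ret' hides at offset 1.
blob = bytes([0x90, 0x58, 0xC3, 0x01, 0x02])
for off, g in find_gadget_candidates(blob):
    print(hex(off), g.hex())
```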
Map construction and exploration by mobile agents scattered in a dangerous network We consider the map construction problem in a simple, connected graph by a set of mobile computation entities, or agents, that start from scattered locations throughout the graph. The problem is further complicated by dangerous elements, nodes and links, in the graph that eliminate agents traversing or arriving at them. The agents working in the graph communicate using a limited amount of storage at each node and work asynchronously. We present a deterministic algorithm that solves the exploration and map construction problems. The end result is also a rooted spanning tree and the election of a leader. The total cost of the algorithm is O(ns·m) moves, where m is the number of links in the network and ns is the number of safe nodes, improving on the existing O(m^2) bound.
Interval observers for linear time-invariant systems with disturbances It is shown that, for any time-invariant exponentially stable linear system with additive disturbances, time-varying exponentially stable interval observers can be constructed. The technique of construction relies on the Jordan canonical form that any real matrix admits and on time-varying changes of coordinates for elementary Jordan blocks which lead to cooperative linear systems. The approach is applied to detectable linear systems.
Spiking Deep Convolutional Neural Networks for Energy-Efficient Object Recognition Deep-learning neural networks such as convolutional neural networks (CNNs) have shown great potential as a solution for difficult vision problems, such as object recognition. Spiking neural network (SNN)-based architectures have shown great potential for realizing ultra-low power consumption using spike-based neuromorphic hardware. This work describes a novel approach for converting a deep CNN into an SNN that enables mapping CNNs to spike-based hardware architectures. Our approach first tailors the CNN architecture to fit the requirements of SNNs, then trains the tailored CNN as one would train an ordinary CNN, and finally applies the learned network weights to an SNN architecture derived from the tailored CNN. We evaluate the resulting SNN on the publicly available Defense Advanced Research Projects Agency (DARPA) Neovision2 Tower and CIFAR-10 datasets and show object recognition accuracy similar to that of the original CNN. Our SNN implementation is amenable to direct mapping to spike-based neuromorphic hardware, such as the chips being developed under the DARPA SyNAPSE program. Our hardware mapping analysis suggests that an SNN implementation on such spike-based hardware is two orders of magnitude more energy-efficient than the original CNN implementation on off-the-shelf FPGA-based hardware.
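The conversion rests on a correspondence between a ReLU activation and the firing rate of an integrate-and-fire (IF) neuron driven by the same weighted input. A minimal sketch of that correspondence (constant drive, unit threshold, reset by subtraction); the architecture tailoring and hardware mapping in the paper are not modeled.

```python
def if_neuron_rate(drive, T=1000, v_thresh=1.0):
    """Spike rate of an integrate-and-fire neuron under constant drive.

    With reset-by-subtraction and 0 <= drive <= v_thresh, the rate
    equals max(drive, 0): exactly the ReLU activation it replaces.
    """
    v, spikes = 0.0, 0
    for _ in range(T):
        v += drive
        if v >= v_thresh:
            v -= v_thresh          # reset by subtraction keeps rate linear
            spikes += 1
    return spikes / T

for drive in [-0.2, 0.0, 0.3, 0.7]:
    relu = max(drive, 0.0)
    print(f"drive={drive:+.1f}  ReLU={relu:.2f}  SNN rate={if_neuron_rate(drive):.2f}")
```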
Multi-Channel Neural Recording Implants: A Review. The recently growing progress in neuroscience research and relevant achievements, as well as advancements in the fabrication process, have increased the demand for neural interfacing systems. Brain-machine interfaces (BMIs) have been shown to be a promising method for the diagnosis and treatment of neurological disorders and the restoration of sensory and motor function. Neural recording implants, as a part of a BMI, are capable of capturing brain signals, and of amplifying, digitizing, and transferring them outside of the body with a transmitter. The main challenges in designing such implants are minimizing power consumption and silicon area. In this paper, multi-channel neural recording implants are surveyed. After presenting various neural-signal features, we investigate the main available neural recording circuit and system architectures. The fundamental blocks of available architectures, such as neural amplifiers, analog-to-digital converters (ADCs) and compression blocks, are explored. We cover the various topologies of neural amplifiers, provide a comparison, and probe their design challenges. Noise reduction techniques for achieving a relatively high SNR at the output of the neural amplifier are discussed. To transfer neural signals outside of the body, they are digitized using data converters, and in most cases data compression is then applied to mitigate power consumption. We present the various dedicated ADC structures, as well as an overview of the main data compression methods.
score_0–score_13: 1.102096, 0.1, 0.1, 0.1, 0.034845, 0.014857, 0.002, 0.000474, 0.000052, 0, 0, 0, 0, 0
Improved delay-dependent stability criteria for time-delay systems This note provides an improved asymptotic stability condition for time-delay systems in terms of a strict linear matrix inequality. Unlike previous methods, the mathematical development avoids bounding certain cross terms, a step that often leads to conservatism. When time-varying norm-bounded uncertainties appear in a delay system, an improved robust delay-dependent stability condition is also given. Examples are provided to demonstrate the reduced conservatism of the proposed conditions. Index Terms—Delay-dependent condition, linear matrix inequality (LMI), time-delay systems, uncertain systems.
Input-to-State Stability for Networked Predictive Control With Random Delays in Both Feedback and Forward Channels. The input-to-state stability (ISS) of a class of networked control systems with random delays and packet dropouts appearing simultaneously in both feedback and forward channels is thoroughly investigated in this paper. A new network predictive controller scheme is introduced in order to compensate for the effect of transmission delays and packet dropouts. By making use of the small gain theorem, the ...
Output tracking control of networked control systems via delay compensation controllers In this paper, the problem of networked output tracking control is investigated by considering delay compensation in both the feedback and forward channels of networked control systems. The delayed output measurements are treated as a special output disturbance, and the feedback-channel delay is compensated with the aid of an extended functional observer. For the delay in the forward channel, buffer-based and packet-based delay compensation approaches are presented, respectively. Stability analysis is then performed for the networked closed-loop systems. Finally, a servo motor control system is used to demonstrate the effectiveness of the proposed design scheme.
Input delay compensation of linear systems with both state and input delays by adding integrators This paper studies stabilization of linear systems with both state and input delays. A dynamic input-delay compensator obtained by adding integrators is established to compensate the input delays that can be arbitrarily large. With the input delay compensator, the original stabilization problem reduces to the problem of stabilizing an augmented linear time-delay system without input delay. Three methods are also proposed to design stabilizing controllers for the augmented linear time-delay system. The first method is based on linear matrix inequalities (LMIs) and the second method is based on model reduction. The third method is based on pole placement and is built for the particular case that the original time-delay system has only a pure delayed state vector on its right hand side. For this method, the optimal gain such that the decay rate of the closed-loop system is maximized is also proposed. The effectiveness of the proposed approaches is illustrated by three linear time-delay systems that are open-loop unstable.
Input Delay Compensation for Forward Complete and Strict-Feedforward Nonlinear Systems We present an approach for compensating input delay of arbitrary length in nonlinear control systems. This approach, which due to the infinite dimensionality of the actuator dynamics and due to the nonlinear character of the plant results in a nonlinear feedback operator, is essentially a nonlinear version of the Smith predictor and its various predictor-based modifications for linear plants. Global stabilization in the presence of arbitrarily long delay is achieved for all nonlinear plants that are globally stabilizable in the absence of delay and that satisfy the property of forward completeness (which is satisfied by most mechanical systems, electromechanical systems, vehicles, and other physical systems). For strict-feedforward systems, one obtains the predictor-based feedback law explicitly. For the linearizable subclass of strict-feedforward systems, closed-loop solutions are also obtained explicitly. The feedback designs are illustrated through two detailed examples.
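In discrete time the predictor idea reduces to something very concrete: propagate the current state forward through the model over the delay horizon, replaying the inputs already in flight, and feed back from the predicted state. A linear discrete-time sketch of this mechanism (the paper's contribution is the nonlinear, continuous-time generalization); the matrices, gain, and delay below are illustrative.

```python
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # illustrative double integrator
B = np.array([[0.0], [0.1]])
K = np.array([[-1.0, -1.8]])             # stabilizing gain for the delay-free plant
D = 5                                     # input delay in steps

def predictor_feedback(x, u_hist):
    """u = K @ x_pred, where x_pred replays the D inputs still in transit."""
    x_pred = x.copy()
    for u_past in u_hist:                 # u_hist holds u[t-D], ..., u[t-1]
        x_pred = A @ x_pred + B @ u_past  # simulate the plant over the delay
    return K @ x_pred

x = np.array([[1.0], [0.0]])
u_hist = [np.zeros((1, 1))] * D
for t in range(40):
    u = predictor_feedback(x, u_hist)
    x = A @ x + B @ u_hist[0]             # plant applies the delayed input
    u_hist = u_hist[1:] + [u]
print(np.linalg.norm(x))                  # contracts toward 0 despite the delay
```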
Finite spectrum assignment of unstable time-delay systems with a safe implementation The instability mechanisms, related to the implementation of distributed delay controllers in the context of finite spectrum assignment, were studied in detail in the past few years. In this note we introduce a distributed delay control law that assigns a finite closed-loop spectrum and whose implementation with a sum of point-wise delays is safe. This property is obtained by implicitly including a low-pass filter in the control loop. This leads to a closed-loop characteristic quasipolynomial of retarded type, and not one of neutral type, which was shown to be a cause of instability in previous schemes.
Cascade High Gain Predictors for a Class of Nonlinear Systems This work presents a set of cascade high gain predictors to reconstruct the state vector of triangular nonlinear systems with delayed output. By using a Lyapunov-Krasovskii approach, simple sufficient conditions ensuring the exponential convergence of the observation error towards zero are given. All predictors used in the cascade have the same structure, a feature that greatly improves the ease of their implementation. The result is illustrated by simulations.
A New Approach to the Internally Positive Representation of Linear MIMO Systems The problem of representing linear systems through combinations of positive systems is relevant when signal processing schemes, such as filters, state observers, or control laws, are to be implemented using “positive” technologies, such as Charge Routing Networks and fiber optic filters. This problem, well investigated in the SISO case, can be recast into the more general problem of Internally Positive Representation (IPR) of systems. This paper presents a methodology for the construction of such IPRs for MIMO systems, based on a suitable convex positive representation of complex vectors and matrices. The stability properties of the IPRs are investigated in depth, achieving the important result that any stable system admits a stable IPR of finite dimension. A main algorithm and three variants, all based on the proposed methodology, are presented for the construction of stable IPRs. All of them are straightforward and are characterized by a very low computational cost. The first and second may require a large state-space dimension to provide a stable IPR, while the third and fourth aim to provide stable IPRs of reduced order.
The Emergence of Intelligent Enterprises: From CPS to CPSS When IEEE Intelligent Systems solicited ideas for a new department, cyber-physical systems (CPS) received overwhelming support. Cyber-Physical-Social Systems (CPSS) is the new name for CPS. CPSS is the enabling platform technology that will lead us to an era of intelligent enterprises and industries. Internet use and cyberspace activities have created an overwhelming demand for the rapid development and application of CPSS. CPSS research must be conducted with a multidisciplinary approach involving the physical, social, and cognitive sciences, and AI-based intelligent systems will be key to any successful construction and deployment.
Design-oriented estimation of thermal noise in switched-capacitor circuits. Thermal noise represents a major limitation on the performance of most electronic circuits. It is particularly important in switched circuits, such as the switched-capacitor (SC) filters widely used in mixed-mode CMOS integrated circuits. In these circuits, switching introduces a boost in the power spectral density of the thermal noise due to aliasing. Unfortunately, even though the theory of nois...
Variability in TCP round-trip times We measured and analyzed the variability in round trip times (RTTs) within TCP connections using passive measurement techniques. We collected eight hours of bidirectional traces containing over 22 million TCP connections between end-points at a large university campus and almost 1 million remote locations. Of these, we used over 1 million TCP connections that yield 10 or more valid RTT samples to examine RTT variability within a TCP connection. Our results indicate that, contrary to observations in several previous studies, RTT values within a connection vary widely. Our results have implications for designing better simulation models and understanding how round trip times affect the dynamic behavior and throughput of TCP connections.
A high efficiency and compact size 65nm power management module with 1.2v low-voltage PWM controller for UWB system application
Current-mode adaptively hysteretic control for buck converters with fast transient response and improved output regulation This paper presents a current-mode adaptively hysteretic control (CMAHC) technique to achieve fast transient response for DC-DC buck converters. A complementary full-range current sensor comprising charging-path and discharging-path sensing transistors is proposed to track the inductor current seamlessly. With the proposed current-mode adaptively hysteretic topology, the inductor current is continuously monitored, and the adaptively hysteretic threshold is dynamically adjusted according to feedback information from the output voltage level. Therefore, a fast load-transient response can be achieved. Besides, the output regulation performance is also improved by the proposed dynamic current-scaling circuitry (DCSC). Moreover, the proposed CMAHC topology can be used in a nearly zero R_ESR design configuration. The prototype, fabricated in a TSMC 0.25μm CMOS process, occupies an area of 1.78mm2 including all bonding pads. Experimental results show that the output voltage ripple is smaller than 30mV over a wide load current range from 0 mA to 500 mA, with maximum power conversion efficiency higher than 90%. The recovery time from light to heavy load (100 to 500 mA) is smaller than 5μs.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
score_0–score_13: 1.103896, 0.052181, 0.052181, 0.028393, 0.022207, 0.007546, 0.001399, 0.000146, 0, 0, 0, 0, 0, 0
Memory safety without garbage collection for embedded applications Traditional approaches to enforcing memory safety of programs rely heavily on run-time checks of memory accesses and on garbage collection, both of which are unattractive for embedded applications. The goal of our work is to develop advanced compiler techniques for enforcing memory safety with minimal run-time overheads. In this paper, we describe a set of compiler techniques that, together with minor semantic restrictions on C programs and no new syntax, ensure memory safety and provide most of the error-detection capabilities of type-safe languages, without using garbage collection, and with no run-time software checks (on systems with standard hardware support for memory management). The language permits arbitrary pointer-based data structures, explicit deallocation of dynamically allocated memory, and restricted array operations. One of the key results of this paper is a compiler technique that ensures that dereferencing dangling pointers to freed memory does not violate memory safety, without annotations, run-time checks, or garbage collection, and works for arbitrary type-safe C programs. Furthermore, we present a new interprocedural analysis for static array bounds checking under certain assumptions. For a diverse set of embedded C programs, we show that we are able to ensure memory safety of pointer and dynamic memory usage in all these programs with no run-time software checks (on systems with standard hardware memory protection), requiring only minor restructuring to conform to simple type restrictions. Static array bounds checking fails for roughly half the programs we study due to complex array references, and these are the only cases where explicit run-time software checks would be needed under our language and system assumptions.
Data Space Randomization Over the past several years, US-CERT advisories, as well as most critical updates from software vendors, have been due to memory corruption vulnerabilities such as buffer overflows, heap overflows, etc. Several techniques have been developed to defend against the exploitation of these vulnerabilities, with the most promising defenses being based on randomization. Two randomization techniques have been explored so far: address space randomization (ASR), which randomizes the location of objects in virtual memory, and instruction set randomization (ISR), which randomizes the representation of code. We explore a third form of randomization called data space randomization (DSR) that randomizes the representation of data stored in program memory. Unlike ISR, DSR is effective against non-control data attacks as well as code injection attacks. Unlike ASR, it can protect against corruption of non-pointer data as well as pointer-valued data. Moreover, DSR provides a much larger range of randomization (typically 2^32 for 32-bit data) as compared to ASR. Other interesting aspects of DSR include (a) it does not share a weakness common to randomization-based defenses, namely, susceptibility to information leakage attacks, and (b) it is capable of detecting some exploits that are missed by full bounds-checking techniques, e.g., some of the overflows from one field of a structure to the next field. Our implementation results show that with appropriate design choices, DSR can achieve a performance overhead in the range of 5% to 30% for a range of programs.
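The mechanics of DSR are simple to demonstrate: each variable (or alias equivalence class) gets a random mask, values are stored XOR-masked, and every legitimate access unmasks through the right key, so an overflow that writes one variable's memory without its mask yields garbage on the next read. A toy Python model, purely illustrative of the transformation a DSR compiler applies to C programs.

```python
import secrets

MASK_BITS = 32

class MaskedVar:
    """A variable whose in-memory representation is XOR-randomized."""
    def __init__(self, value=0):
        self._mask = secrets.randbits(MASK_BITS)   # per-variable random mask
        self._mem = value ^ self._mask             # stored representation

    def load(self):
        return self._mem ^ self._mask              # legitimate, masked access

    def store(self, value):
        self._mem = value ^ self._mask

secret = MaskedVar(0xDEADBEEF)
print(hex(secret.load()))         # 0xdeadbeef, recovered via its own mask

# An out-of-bounds write lands in raw memory without the right mask:
secret._mem = 0x41414141          # attacker-controlled bytes
print(hex(secret.load()))         # garbage (almost surely not 0x41414141)
```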
Precise garbage collection for C Magpie is a source-to-source transformation for C programs that enables precise garbage collection, where precise means that integers are not confused with pointers, and the liveness of a pointer is apparent at the source level. Precise GC is primarily useful for long-running programs and programs that interact with untrusted components. In particular, we have successfully deployed precise GC in the C implementation of a language run-time system that was originally designed to use conservative GC. We also report on our experience in transforming parts of the Linux kernel to use precise GC instead of manual memory management.
SVF: interprocedural static value-flow analysis in LLVM. This paper presents SVF, a tool that enables scalable and precise interprocedural Static Value-Flow analysis for C programs by leveraging recent advances in sparse analysis. SVF, which is fully implemented in LLVM, allows value-flow construction and pointer analysis to be performed in an iterative manner, thereby providing increasingly improved precision for both. SVF accepts points-to information generated by any pointer analysis (e.g., Andersen’s analysis) and constructs an interprocedural memory SSA form, in which the def-use chains of both top-level and address-taken variables are captured. Such value-flows can be subsequently exploited to support various forms of program analysis or enable more precise pointer analysis (e.g., flow-sensitive analysis) to be performed sparsely. By dividing a pointer analysis into three loosely coupled components: Graph, Rules and Solver, SVF provides an extensible interface for users to write their own solutions easily. SVF is publicly available at http://unsw-corg.github.io/SVF.
MarkUs: Drop-in use-after-free prevention for low-level languages Use-after-free vulnerabilities have plagued software written in low-level languages, such as C and C++, becoming one of the most frequent classes of exploited software bugs. Attackers identify code paths where data is manually freed by the programmer, but later incorrectly reused, and take advantage by reallocating the data to themselves. They then alter the data behind the program's back, using the erroneous reuse to gain control of the application and, potentially, the system. While a variety of techniques have been developed to deal with these vulnerabilities, they often have unacceptably high performance or memory overheads, especially in the worst case.We have designed MarkUs, a memory allocator that prevents this form of attack at low overhead, sufficient for deployment in real software, even under allocation- and memory-intensive scenarios. We prevent use-after-free attacks by quarantining data freed by the programmer and forbidding its reallocation until we are sure that there are no dangling pointers targeting it. To identify these we traverse live-objects accessible from registers and memory, marking those we encounter, to check whether quarantined data is accessible from any currently allocated location. Unlike garbage collection, which is unsafe in C and C++, MarkUs ensures safety by only freeing data that is both quarantined by the programmer and has no identifiable dangling pointers. The information provided by the programmer's allocations and frees further allows us to optimise the process by freeing physical addresses early for large objects, specialising analysis for small objects, and only performing marking when sufficient data is in quarantine. Using MarkUs, we reduce the overheads of temporal safety in low-level languages to 1.1× on average for SPEC CPU2006, with a maximum slowdown of only 2×, vastly improving upon the state-of-the-art.
Mitigating data leakage by protecting memory-resident sensitive data Gaining reliable arbitrary code execution through the exploitation of memory corruption vulnerabilities is becoming increasingly more difficult in the face of modern exploit mitigations. Facing this challenge, adversaries have started shifting their attention to data leakage attacks, which can lead to equally damaging outcomes, such as the disclosure of private keys or other sensitive data. In this work, we present a compiler-level defense against data leakage attacks for user-space applications. Our approach strikes a balance between the manual effort required to protect sensitive application data, and the performance overhead of achieving strong data confidentiality. To that end, we require developers to simply annotate those variables holding sensitive data, after which our framework automatically transforms only the fraction of the entire program code that is related to sensitive data operations. We implemented this approach by extending the LLVM compiler, and used it to protect memory-resident private keys in the MbedTLS server, ssh-agent, and a Libsodium-based file signing program, as well as user passwords for Lighttpd and Memcached. Our results demonstrate the feasibility and practicality of our technique: a modest runtime overhead (e.g., 13% throughput reduction for MbedTLS) that is on par with, or better than, existing state-of-the-art memory safety approaches for selective data protection.
Control-flow integrity principles, implementations, and applications Current software attacks often build on exploits that subvert machine-code execution. The enforcement of a basic safety property, control-flow integrity (CFI), can prevent such attacks from arbitrarily controlling program behavior. CFI enforcement is simple and its guarantees can be established formally, even with respect to powerful adversaries. Moreover, CFI enforcement is practical: It is compatible with existing software and can be done efficiently using software rewriting in commodity systems. Finally, CFI provides a useful foundation for enforcing further security policies, as we demonstrate with efficient software implementations of a protected shadow call stack and of access control for memory regions.
Differential Power Analysis Cryptosystem designers frequently assume that secrets will be manipulated in closed, reliable computing environments. Unfortunately, actual computers and microchips leak information about the operations they process. This paper examines specific methods for analyzing power consumption measurements to find secret keys from tamper-resistant devices. We also discuss approaches for building cryptosystems that can operate securely in existing hardware that leaks information.
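The classic attack can be reproduced on synthetic data in a few lines: guess a key byte, predict one intermediate bit per trace, split the traces on that prediction, and the correct guess is the one whose difference of means spikes. A self-contained sketch with a simulated leaky device; the S-box stand-in, leakage model, and noise level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
SBOX = rng.permutation(256)        # stand-in S-box, fixed per run
TRUE_KEY = 0x3C

def trace(pt):
    """Simulated power sample: leaks one bit of SBOX[pt ^ key] plus noise."""
    bit = SBOX[pt ^ TRUE_KEY] & 1
    return bit * 1.0 + rng.normal(0.0, 2.0)

pts = rng.integers(0, 256, size=20000)
traces = np.array([trace(p) for p in pts])

def dpa_guess(pts, traces):
    """Difference-of-means DPA over all 256 key-byte guesses."""
    best, best_score = None, -1.0
    for guess in range(256):
        pred = SBOX[pts ^ guess] & 1     # predicted leaked bit per trace
        score = abs(traces[pred == 1].mean() - traces[pred == 0].mean())
        if score > best_score:
            best, best_score = guess, score
    return best

print(hex(dpa_guess(pts, traces)), hex(TRUE_KEY))   # guesses match
```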
Leveraging on-chip voltage regulators as a countermeasure against side-channel attacks Side-channel attacks have become a significant threat to integrated circuit security. Circuit-level techniques are proposed in this paper as a countermeasure against side-channel attacks. A distributed on-chip power delivery system consisting of multi-level switched capacitor (SC) voltage converters is proposed, where the individual interleaved stages are turned on and off either based on workload information or pseudo-randomly to scramble the power consumption profile. In the case that changes in the workload demand do not trigger the power delivery system to turn individual stages on or off, the active stages are reshuffled with so-called converter-reshuffling (CoRe) to insert random spikes in the power consumption profile. An entropy-based metric is developed to evaluate the security performance of the proposed converter-reshuffling technique as compared to three other existing on-chip power delivery schemes. The increase in the power trace entropy with the CoRe scheme is also demonstrated with simulation results to further verify the theoretical analysis.
The Impact of Data Aggregation in Wireless Sensor Networks Sensor networks are distributed event-based systems that differ from traditional communication networks in several ways: sensor networks have severe energy constraints, redundant low-rate data, and many-to-one flows. Data-centric mechanisms that perform in-network aggregation of data are needed in this setting for energy-efficient information flow. In this paper we model data-centric routing and compare its performance with traditional end-to-end routing schemes. We examine the impact of source-destination placement and communication network density on the energy costs and delay associated with data aggregation. We show that data-centric routing offers significant performance gains across a wide range of operational scenarios. We also examine the complexity of optimal data aggregation, showing that although it is an NP-hard problem in general, there exist useful polynomial-time special cases.
Pinning a complex dynamical network to its equilibrium It is now known that the complexity of network topology has a great impact on the stabilization of complex dynamical networks. In this work, we study the control of random networks and scale-free networks. Conditions are investigated for globally or locally stabilizing such networks. Our strategy is to apply local feedback control to a small fraction of the network nodes. We propose the concept of virtual control for microscopic dynamics throughout the process, with different pinning schemes for both random networks and scale-free networks. We explain the main reason why significantly fewer local controllers are required when specifically pinning the most highly connected nodes in a scale-free network than when pinning nodes at random, and why there is no significant difference between the specific and random pinning schemes for controlling random dynamical networks. We also study the synchronization phenomenon of controlled dynamical networks in the stabilization process, both analytically and numerically.
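The pinning strategy can be explored numerically: couple identical nodes through a graph Laplacian, apply feedback only at a pinned subset, and check whether the whole network is driven to the equilibrium. A small linear sketch (the paper treats general nonlinear node dynamics); the random graph, gains, and selection rules below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
N, frac = 60, 0.1

# Random graph; hubs make degree-based pinning interesting.
A = (rng.random((N, N)) < 0.08).astype(float)
A = np.triu(A, 1); A = A + A.T
L = np.diag(A.sum(1)) - A                      # graph Laplacian

def pinned_decay_rate(pinned, c=1.0, kappa=5.0):
    """Slowest eigenvalue of the pinned dynamics x' = -(cL + kP)x.

    A positive smallest eigenvalue of cL + kP means the network is
    driven to the equilibrium x = 0 by controlling only the pinned set.
    """
    P = np.zeros((N, N))
    P[pinned, pinned] = 1.0
    return np.linalg.eigvalsh(c * L + kappa * P).min()

k = int(frac * N)
by_degree = np.argsort(A.sum(1))[-k:]          # pin highest-degree nodes
at_random = rng.choice(N, size=k, replace=False)
print("degree-pinned rate:", pinned_decay_rate(by_degree))
print("random-pinned rate:", pinned_decay_rate(at_random))
```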
A proposal of architectural elements for implementing secure software download service in software defined radio In order to obtain an appropriate, high level of security, a number of architectural elements for secure downloading of software to a software defined radio (SDR) terminal have been pointed out. They include four different cryptographic techniques and employment of tamper resistant hardware. The cryptographic techniques employed are: (a) a secret key encryption technique; (b) a public key encryption technique; (c) a technique for cryptographic hashing and (d) a technique for digital signature. Particularly, a protocol for exchanging cryptographic components in an automatic manner without any assistance from the user, is proposed. Implementation characteristics of certain cryptographic components are also discussed.
The real-time segmentation of indoor scene based on RGB-D sensor The vision system of a mobile robot is a low-level function that provides the target information about the current environment required by higher-level vision tasks. The real-time performance and robustness of object segmentation in cluttered environments are still serious problems for robot vision. In this paper, a new real-time indoor scene segmentation method based on RGB-D images is presented, and the extracted primary object regions are then used for object recognition. First, the paper accomplishes depth filtering with an improved version of the traditional filtering method. Then, using the improved depth information, the algorithm extracts the foreground and segments objects in the color image at a resolution of 640×480 from a Kinect camera. Finally, the segmentation results are applied to object recognition in indoor scenes to validate the effectiveness of the scene segmentation. The indoor segmentation results demonstrate the real-time performance and robustness of the proposed method. In addition, the segmentation results improve the accuracy and reduce the time of object recognition in cluttered indoor scenes.
A 32-Channel Time-Multiplexed Artifact-Aware Neural Recording System This paper presents a low-power, low-noise microsystem for the recording of neural local field potentials or intracranial electroencephalographic signals. It features 32 time-multiplexed channels at the electrode interface and offers the possibility to spatially delta encode data to take advantage of the large correlation of signals captured from nearby channels. The circuit also implements a mixed-signal voltage-triggered auto-ranging algorithm which allows large interferers to be attenuated in the digital domain while preserving neural information. This effectively increases the system dynamic range and avoids the onset of saturation. A prototype, fabricated in a standard 180 nm CMOS process, has been experimentally verified in vitro with cellular cultures of primary cortical neurons from mice. The system shows an integrated input-referred noise in the 0.5–200 Hz band of 1.4 μVrms for a spot noise of about 85 nV/√Hz. The system draws 1.5 μW per channel from a 1.2 V supply and obtains 71 dB + 26 dB dynamic range when the artifact-aware auto-ranging mechanism is enabled, without penalising other critical specifications such as crosstalk between channels or common-mode and power supply rejection ratios.
score_0–score_13: 1.0699, 0.066667, 0.066667, 0.066667, 0.066667, 0.033333, 0.002955, 0.000007, 0, 0, 0, 0, 0, 0
A 0.003 mm2 10 b 240 MS/s 0.7 mW SAR ADC in 28 nm CMOS With Digital Error Correction and Correlated-Reversed Switching This paper describes a single-channel, calibration-free Successive-Approximation-Register (SAR) ADC with a resolution of 10 bits at 240 MS/s. A DAC switching technique and an addition-only digital error correction technique based on non-binary search are proposed to tackle the static and dynamic non-idealities attributed to capacitor mismatch and insufficient DAC settling. The conversion speed is enhanced, and the power and area of the DAC are reduced by 40% as a result. In addition, a switching scheme that lifts the input common mode of the comparator is proposed to further enhance the speed. Moreover, the comparator employs multiple feedback paths for enhanced regeneration strength to alleviate the metastability problem. Occupying an active area of 0.003 mm2 and dissipating 0.68 mW from a 1 V supply at 240 MS/s in 28 nm CMOS, the proposed design achieves an SNDR of 57 dB with low-frequency inputs and 53 dB at the Nyquist input. This corresponds to a conversion efficiency of 4.8 fJ/c.-s. and 7.8 fJ/c.-s., respectively. The DAC switching technique improves the INL and DNL from +1.15/-1.01 LSB and +0.92/-0.28 LSB to within +0.55/-0.45 LSB and +0.45/-0.23 LSB, respectively. This ADC is at least 80% smaller and 32% more power efficient than reported state-of-the-art ADCs of similar resolutions and Nyquist bandwidths larger than 75 MHz.
Design Considerations of Ultralow-Voltage Self-Calibrated SAR ADC This brief presents a 0.5-V 11-bit successive approximation register analog-to-digital converter (ADC) with a focus on self-calibration at a low supply voltage. The relationships among the noise of comparators, the resolution of a calibration digital-to-analog converter (DAC), and the overall ADC performance are studied. Analysis shows that the nonlinearity of a calibration DAC and a coupling capacitor has an insignificant effect. An ultralow-leakage switch is also described, and an improved process of measuring mismatch is proposed to alleviate the charge injection of a sampling switch. Fabricated in 0.13-μm CMOS with an active area of 0.868 mm2, the ADC achieves a signal-to-noise-plus-distortion ratio (SNDR) of 62.12 dB and a spurious-free dynamic range of 73.03 dB at a 500-kS/s sampling rate. The power consumption is 39.9 μW.
A 1-V 9.8-ENOB 100-kS/s single-ended SAR ADC with symmetrical DAC switching technique for neural signal acquisition This paper reports a high-performance, low-power and area-efficient single-ended SAR ADC for neural signal acquisition. The proposed 10-bit ADC features a novel symmetrical DAC switching technique that resolves the signal-dependent comparator offset voltage problem in conventional single-ended SAR ADCs and improves the ADC's ENOB. Combined with an existing LSB single-sided switching method, the proposed switching scheme reduces DAC switching energy by 92% and capacitor array area by 50%. Besides, the proposed ADC also eliminates the need for any power-consuming Vcm generation circuit, making it more suitable for low-power System-on-Chip (SoC) integration. The 10-bit prototype ADC is fabricated in a standard 0.18-um CMOS technology. Operating at a 1.0 V power supply and 100 kS/s, the proposed ADC achieves 58.83 dB SNDR and 63.6 dB SFDR for a 49.06 kHz input signal. The maximum ENOB is 9.8 bits for low-frequency input signals, and the minimum ENOB is 9.48 bits at the Nyquist input frequency. The average power consumption is 1.72 μW and the figure-of-merit (FoM) is 24.1 fJ/conversion-step.
Implementation of Low-Power 6-8 b 30-90 GS/s Time-Interleaved ADCs With Optimized Input Bandwidth in 32 nm CMOS. A model for voltage-based time-interleaved sampling is introduced with two implementations of highly interleaved analog-to-digital converters (ADCs) for 100 Gb/s communication systems. The model is suitable for ADCs where the analog input bandwidth is of concern and enables a tradeoff between different architectures with respect to the analog input bandwidth, the hold time of the sampled signal, a...
A 2.02-5.16 fJ/Conversion Step 10 Bit Hybrid Coarse-Fine SAR ADC With Time-Domain Quantizer in 90 nm CMOS. This paper presents an ultra-low-voltage and power-efficient 10 bit hybrid successive approximation register (SAR) analog-to-digital converter (ADC). For reducing the digital-to-analog converter (DAC) capacitance and comparator requirement, we propose a hybrid architecture comprising a coarse 7 bit SAR ADC and fine 3.5 bit time-to-digital converter (TDC). The Vcm-based switching method is adopted ...
A 12-bit 8.47-fJ/Conversion-Step Capacitor-Swapping SAR ADC in 110-nm CMOS This paper presents a 12-bit energy-efficient successive approximation register analog-to-digital converter (ADC). By incorporating the proposed capacitor-swapping technique, which eliminates the problematic MSB mismatch transition of a binary-weighted capacitor digital-to-analog converter, the 12-bit linearity of the ADC is achieved without increasing the capacitor size for improved matching. The small capacitor size results in low power consumption. In addition, an on-the-fly programmable dynamic comparator is used for quick comparisons with low noise contributions within the limited power budget. The ADC is fabricated using a 110-nm CMOS process. It consumes 16.47 μW from a 0.9-V supply at a conversion-rate of 1 MS/s. The measured DNL and INL are within 0.3 LSB and 0.56 LSB, respectively. The measured SNDR and SFDR are at 67.3 dB and 87 dB, respectively. The ENOB performance is 10.92 b, which is equivalent to a figure-of-merit of 8.47 fJ/conversion-step.
A 0.5 V 1.1 MS/sec 6.3 fJ/Conversion-Step SAR-ADC With Tri-Level Comparator in 40 nm CMOS This paper presents an extremely low-voltage, power-efficient successive-approximation-register (SAR) analog-to-digital converter (ADC). A tri-level comparator is proposed to relax the speed requirement of the comparator and decrease the resolution of the internal digital-to-analog converter (DAC) by 1 bit. The internal charge-redistribution DAC employs a unit capacitance of 0.5 fF, and the ADC operates close to the thermal noise limit. To deal with the problem of capacitor mismatch, a reconfigurable capacitor array and a calibration procedure were developed. The prototype ADC, fabricated in a 40 nm CMOS process, achieves 46.8 dB SNDR and 58.2 dB SFDR at 1.1 MS/sec from a 0.5 V power supply. The FoM is 6.3 fJ/conversion-step and the chip die area is only 160 μm × 70 μm.
A 15-Channel Digital Active Electrode System for Multi-Parameter Biopotential Measurement This paper presents a digital active electrode (DAE) system for multi-parameter biopotential signal acquisition in portable and wearable devices. It is built around an IC that performs analog signal processing and digitization with the help of on-chip instrumentation amplifiers, a 12 bit ADC and a digital interface. Via a standard I2C bus, up to 16 digital active electrodes (15 channels) can be connected to a commercially available microcontroller, thus significantly reducing system complexity and cost. In addition, the DAE utilizes an innovative functionally DC-coupled amplifier to preserve the input DC signal, while still achieving state-of-the-art performance: 60 nV/sqrt(Hz) input-referred noise and ±350 mV electrode-offset tolerance. A common-mode feedforward scheme improves the CMRR of an AE pair from 40 dB to a maximum of 102 dB.
GPUWattch: enabling energy optimizations in GPGPUs General-purpose GPUs (GPGPUs) are becoming prevalent in mainstream computing, and performance per watt has emerged as a more crucial evaluation metric than peak performance. As such, GPU architects require robust tools that will enable them to quickly explore new ways to optimize GPGPUs for energy efficiency. We propose a new GPGPU power model that is configurable, capable of cycle-level calculations, and carefully validated against real hardware measurements. To achieve configurability, we use a bottom-up methodology and abstract parameters from the microarchitectural components as the model's inputs. We developed a rigorous suite of 80 microbenchmarks that we use to bound any modeling uncertainties and inaccuracies. The power model is comprehensively validated against measurements of two commercially available GPUs, and the measured error is within 9.9% and 13.4% for the two target GPUs (GTX 480 and Quadro FX5600). The model also accurately tracks the power consumption trend over time. We integrated the power model with the cycle-level simulator GPGPU-Sim and demonstrate the energy savings by utilizing dynamic voltage and frequency scaling (DVFS) and clock gating. Traditional DVFS reduces GPU energy consumption by 14.4% by leveraging within-kernel runtime variations. More finer-grained SM cluster-level DVFS improves the energy savings from 6.6% to 13.6% for those benchmarks that show clustered execution behavior. We also show that clock gating inactive lanes during divergence reduces dynamic power by 11.2%.
ADRES: An Architecture with Tightly Coupled VLIW Processor and Coarse-Grained Reconfigurable Matrix Coarse-grained reconfigurable architectures have advantages over traditional FPGAs in terms of delay, area and configuration time. To execute entire applications, most of them combine an instruction set processor (ISP) and a reconfigurable matrix. However, not much attention is paid to the integration of these two parts, which results in high communication overhead and programming difficulty. To address this problem, we propose a novel architecture with a tightly coupled very long instruction word (VLIW) processor and coarse-grained reconfigurable matrix. The advantages include a simplified programming model, shared resource costs, and reduced communication overhead. To exploit this architecture, our previously developed compiler framework is adapted to the new architecture. The results show that the new architecture has good performance and is very compiler-friendly.
Controlling the cost of reliability in peer-to-peer overlays Structured peer-to-peer overlay networks provide a useful substrate for building distributed applications but there are general concerns over the cost of maintaining these overlays. The current approach is to configure the overlays statically and conservatively to achieve the desired reliability even under uncommon adverse conditions. This results in high cost in the common case, or poor reliability in worse than expected conditions. We analyze the cost of overlay maintenance in realistic dynamic environments and design novel techniques to reduce this cost by adapting to the operating conditions. With our techniques, the concerns over the overlay maintenance cost are no longer warranted. Simulations using real traces show that they enable high reliability and performance even in very adverse conditions with low maintenance cost.
Control of robotic mobility-on-demand systems: A queueing-theoretical perspective In this paper we present queueing-theoretical methods for the modeling, analysis, and control of autonomous mobility-on-demand (MOD) systems wherein robotic, self-driving vehicles transport customers within an urban environment and rebalance themselves to ensure acceptable quality of service throughout the network. We first cast an autonomous MOD system within a closed Jackson network model with passenger loss. It is shown that an optimal rebalancing algorithm minimizing the number of (autonomously) rebalancing vehicles while keeping vehicle availabilities balanced throughout the network can be found by solving a linear program. The theoretical insights are used to design a robust, real-time rebalancing algorithm, which is applied to a case study of New York City and implemented on an eight-vehicle mobile robot testbed. The case study of New York shows that the current taxi demand in Manhattan can be met with about 8,000 robotic vehicles (roughly 70% of the size of the current taxi fleet operating in Manhattan). Finally, we extend our queueing-theoretical setup to include congestion effects, and study the impact of autonomously rebalancing vehicles on overall congestion. Using a simple heuristic algorithm, we show that additional congestion due to autonomous rebalancing can be effectively avoided on a road network. Collectively, this paper provides a rigorous approach to the problem of system-wide coordination of autonomously driving vehicles, and provides one of the first characterizations of the sustainability benefits of robotic transportation networks.
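The rebalancing step has the structure of a transportation linear program: move surplus idle vehicles to stations with deficits at minimum total travel cost. A small sketch with scipy.optimize.linprog, assuming known surpluses, deficits, and a travel-time matrix; this mirrors the flavor of the paper's linear program, not its exact formulation.

```python
import numpy as np
from scipy.optimize import linprog

# Stations with surplus vehicles (supply) and shortages (demand).
supply = np.array([4, 2])        # excess idle vehicles at stations A, B
demand = np.array([3, 3])        # vehicles needed at stations C, D
cost = np.array([[10.0, 4.0],    # travel times A->C, A->D
                 [6.0, 12.0]])   # travel times B->C, B->D

m, n = cost.shape
c = cost.ravel()                 # decision vars: flow f[i, j], row-major

# Equality constraints: each surplus fully shipped, each shortage met.
A_eq = np.zeros((m + n, m * n))
for i in range(m):
    A_eq[i, i * n:(i + 1) * n] = 1.0          # sum_j f[i, j] = supply[i]
for j in range(n):
    A_eq[m + j, j::n] = 1.0                   # sum_i f[i, j] = demand[j]
b_eq = np.concatenate([supply, demand])

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
print(res.x.reshape(m, n))       # optimal rebalancing flows
```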
Towards elastic SDR architectures using dynamic task management. SDR platforms integrating several types and numbers of processing elements in System-on-Chips become an attractive solution for baseband processing in wireless systems. In order to cope with the diversity of protocol applications and the heterogeneity of multi-core architectures, a hierarchical approach for workload distribution is proposed in this paper. Specifically, a system-level scheduler is employed to map applications to multiple processing clusters, complemented with a cluster-level scheduler - the CoreManager - for dynamic resource allocation and configuration as well as for task and data scheduling. A performance analysis of the proposed approach is presented, which shows the advantages of dynamic scheduling against a static approach for variable workloads in the LTE-Advanced uplink multi-user scenarios.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
score_0–score_13: 1.068198, 0.066667, 0.066667, 0.03563, 0.016667, 0.002222, 0.000199, 0.000016, 0, 0, 0, 0, 0, 0
The Future of Short Reach Interconnect The unprecedented information explosion and its increasing demands on data traffic and processing are pushing a rapid and diverse evolution in short reach interconnect technologies. In the background of this race, CMOS technology is no longer providing the usual node-over-node boost in performance to help SerDes developers cope with higher bandwidths and data rates. Breakthroughs in high speed electrical interconnect and new approaches in optical interconnect, such as SiPho (silicon photonics), NPO (near-package optics) and CPO (co-packaged optics), promise improved performance, energy efficiency and density. What specific challenges are new and emerging applications such as AI and HPC posing for interconnect? How are the various interconnect technologies going to respond to the need for die disaggregation and multi-die IC products? The focus of this paper is to provide some directions and an underlying logic to help navigate this very complex technology landscape.
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use dominance frontiers, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
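A compact sketch of dominance-frontier computation; note that this uses the later Cooper-Harvey-Kennedy formulation rather than the algorithm as originally presented in the paper, and the CFG below is a toy example.

    # Dominance frontiers via the Cooper-Harvey-Kennedy formulation
    # (a later simplification, not the paper's original algorithm).
    def dominance_frontiers(preds, idom):
        # preds: node -> list of CFG predecessors; idom: node -> immediate dominator
        df = {n: set() for n in preds}
        for b in preds:
            if len(preds[b]) >= 2:              # only join nodes contribute
                for p in preds[b]:
                    runner = p
                    while runner != idom[b]:
                        df[runner].add(b)       # b is in runner's dominance frontier
                        runner = idom[runner]
        return df

    # Toy diamond CFG: entry -> a, entry -> b, a -> merge, b -> merge.
    preds = {"entry": [], "a": ["entry"], "b": ["entry"], "merge": ["a", "b"]}
    idom = {"entry": "entry", "a": "entry", "b": "entry", "merge": "entry"}
    print(dominance_frontiers(preds, idom))     # DF(a) = DF(b) = {'merge'}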
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ◊W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ◊W. Thus, ◊W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
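A toy sketch of Chord's single operation, mapping a key onto a node; the identifiers are hypothetical, and this in-memory loop stands in for the distributed finger-table lookup that Chord performs in O(log n) hops.

    M = 6                                    # identifier space: 0 .. 2**6 - 1
    nodes = sorted([1, 12, 23, 38, 51, 60])  # hypothetical node identifiers on the ring

    def successor(key):
        # Map a key onto the first node at or clockwise after it on the ring.
        kid = key % (2 ** M)
        for n in nodes:
            if n >= kid:
                return n
        return nodes[0]                      # wrap around past the highest identifier

    print(successor(26))   # -> 38
    print(successor(62))   # -> 1 (wraps around the ring)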
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area² product (EDA²P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA²P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
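A minimal numpy sketch of ADMM applied to the lasso, one of the applications listed above; the toy problem and parameter values are illustrative.

    # minimize (1/2)||Ax - b||^2 + lam*||x||_1 via the splitting x = z.
    import numpy as np

    def lasso_admm(A, b, lam, rho=1.0, iters=200):
        n = A.shape[1]
        x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
        AtA_rhoI = A.T @ A + rho * np.eye(n)    # factor once in a serious solver
        Atb = A.T @ b
        for _ in range(iters):
            x = np.linalg.solve(AtA_rhoI, Atb + rho * (z - u))     # x-update (ridge)
            v = x + u
            z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0)  # soft threshold
            u = u + x - z                                          # dual update
        return z

    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 20))
    x_true = np.zeros(20)
    x_true[:3] = [2.0, -1.5, 1.0]
    b = A @ x_true + 0.01 * rng.standard_normal(50)
    print(np.round(lasso_admm(A, b, lam=1.0), 2))   # recovers a sparse estimate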
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)-size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε²}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D+√n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(log N), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator; while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via error amplifier) with a moderate-gain high-frequency path (via bandpass filter) at the inputs of the PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate the two zeros required in a Type III compensator. A constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process, with the constant Gm/C biasing technique being applied to one of the designs. Measurement results show that the converter output settles within 7 μs for a load current step of 500 mA. Peak efficiency of 97% is obtained at 360 mW output power, and high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of the proposed compensator are reduced by >75% in both designs, compared to an equivalent conventional Type III compensator.
Fundamental analysis of a car to car visible light communication system This paper presents a mathematical model for car-to-car (C2C) visible light communications (VLC) that aims to predict the system performance under different communication geometries. A market-weighted headlamp beam pattern model is employed. We consider both the line-of-sight (LOS) and non-line-of-sight (NLOS) links, and outline the relationship between the communication distance, the system bit error rate (BER) performance and the BER distribution on a vertical plane. Results show that by placing a photodetector (PD) at a height of 0.2-0.4 m above the road surface in the car, the communications coverage range can be extended up to 20 m at a data rate of 2 Mbps.
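For orientation, a generic Lambertian line-of-sight link-budget sketch of the kind used in VLC analyses; the paper itself employs a measured, market-weighted headlamp beam pattern, and every parameter value below is hypothetical.

    import math

    def received_power(pt_w, d_m, half_angle_deg, phi_deg, psi_deg, area_m2):
        # Idealized Lambertian LOS channel gain (not the paper's headlamp model).
        m = -math.log(2) / math.log(math.cos(math.radians(half_angle_deg)))
        return (pt_w * (m + 1) / (2 * math.pi * d_m ** 2)
                * math.cos(math.radians(phi_deg)) ** m
                * area_m2 * math.cos(math.radians(psi_deg)))

    def ook_ber(snr):
        # BER of on-off keying given electrical SNR: Q(sqrt(SNR)).
        return 0.5 * math.erfc(math.sqrt(snr) / math.sqrt(2))

    pr = received_power(pt_w=1.0, d_m=20.0, half_angle_deg=30.0,
                        phi_deg=10.0, psi_deg=10.0, area_m2=1e-4)
    print(f"received power at 20 m: {pr:.2e} W")
    print(f"OOK BER at 15 dB SNR: {ook_ber(10 ** 1.5):.1e}")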
A 32-Channel Time-Multiplexed Artifact-Aware Neural Recording System This paper presents a low-power, low-noise microsystem for the recording of neural local field potentials or intracranial electroencephalographic signals. It features 32 time-multiplexed channels at the electrode interface and offers the possibility to spatially delta encode data to take advantage of the large correlation of signals captured from nearby channels. The circuit also implements a mixed-signal voltage-triggered auto-ranging algorithm which makes it possible to attenuate large interferers in the digital domain while preserving neural information. This effectively increases the system dynamic range and avoids the onset of saturation. A prototype, fabricated in a standard 180 nm CMOS process, has been experimentally verified in-vitro with cellular cultures of primary cortical neurons from mice. The system shows an integrated input-referred noise in the 0.5–200 Hz band of 1.4 μVrms for a spot noise of about 85 nV/√Hz. The system draws 1.5 μW per channel from a 1.2 V supply and obtains 71 dB + 26 dB dynamic range when the artifact-aware auto-ranging mechanism is enabled, without penalising other critical specifications such as crosstalk between channels or common-mode and power supply rejection ratios.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
A Comparative Study of Statistical and Rough Computing Models in Predictive Data Analysis. The information and technology revolution has brought a radical change in the way data are collected. The data collected are of no use unless some useful information is derived from them. Therefore, it is essential to think of some predictive analysis for analyzing data and to get meaningful information. Much research has been carried out in the direction of predictive data analysis, ranging from statistical techniques to intelligent computing techniques and further to hybridized computing techniques. The prime objective of this paper is to make a comparative analysis between statistical, rough computing, and hybridized techniques. The comparative analysis is carried out over the financial bankruptcy data set of the Greek industrial bank ETEVA. It is concluded that rough computing techniques provide better accuracy (88.2%) than statistical techniques, whereas hybridized computing techniques provide still better accuracy (94.1%) than rough computing techniques.
scikit-image: Image processing in Python. scikit-image is an image processing library that implements algorithms and utilities for use in research, education and industry applications. It is released under the liberal Modified BSD open source license, provides a well-documented API in the Python programming language, and is developed by an active, international team of collaborators. In this paper we highlight the advantages of open source to achieve the goals of the scikit-image library, and we showcase several real-world image processing applications that use scikit-image.
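A short usage sketch of the scikit-image API described above; these are real library functions, and the particular pipeline is just an example.

    from skimage import data, filters

    image = data.camera()                       # built-in grayscale test image
    edges = filters.sobel(image)                # Sobel edge-magnitude filter
    threshold = filters.threshold_otsu(image)   # global Otsu threshold
    binary = image > threshold                  # boolean segmentation mask
    print(edges.shape, float(threshold))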
The rise of "big data" on cloud computing: Review and open research issues. Cloud computing is a powerful technology to perform massive-scale and complex computing. It eliminates the need to maintain expensive computing hardware, dedicated space, and software. Massive growth in the scale of data or big data generated through cloud computing has been observed. Addressing big data is a challenging and time-demanding task that requires a large computational infrastructure to ensure successful data processing and analysis. The rise of big data in cloud computing is reviewed in this study. The definition, characteristics, and classification of big data along with some discussions on cloud computing are introduced. The relationship between big data and cloud computing, big data storage systems, and Hadoop technology are also discussed. Furthermore, research challenges are investigated, with focus on scalability, availability, data integrity, data transformation, data quality, data heterogeneity, privacy, legal and regulatory issues, and governance. Lastly, open research issues that require substantial research efforts are summarized.
Social big data: Recent achievements and new challenges • The paper presents the methodologies on information fusion for social media. • The methodologies, frameworks, and software used to work with big data are given. • The state of the art in data analytic techniques on social big data is provided. • Social big data applications for various domains are described and analyzed.
Trends in transportation and logistics. • Overview of the historical contributions of Operational Research to problems in transportation and logistics. • Future trends in transportation and logistics. • Future potential contributions of Operational Research to problems in transportation and logistics.
Unlocking the power of big data in new product development. This study explores how big data can be used to enable customers to express unrecognised needs. By acquiring this information, managers can gain opportunities to develop customer-centred products. Big data can be defined as multimedia-rich and interactive low-cost information resulting from mass communication. It offers customers a better understanding of new products and provides new, simplified modes of large-scale interaction between customers and firms. Although previous studies have pointed out that firms can better understand customers’ preferences and needs by leveraging different types of available data, the situation is evolving, with increasing application of big data analytics for product development, operations and supply chain management. In order to utilise the customer information available from big data to a larger extent, managers need to identify how to establish a customer-involving environment that encourages customers to share their ideas with managers, contribute their know-how, fiddle around with new products, and express their actual preferences. We investigate a new product development project at an electronics company, STE, and describe how big data is used to connect to, interact with and involve customers in new product development in practice. Our findings reveal that big data can offer customer involvement so as to provide valuable input for developing new products. In this paper, we introduce a customer involvement approach as a new means of coming up with customer-centred new product development.
Efficient closed high-utility pattern fusion model in large-scale databases High-Utility Itemset Mining (HUIM) has been considered a major issue in recent decades since it reveals profit strategies for use in industry for decision-making. Most existing works have focused on mining high-utility itemsets from databases, producing large numbers of patterns; however, exact decisions are still challenging to make from such large amounts of discovered knowledge. Closed high-utility itemset mining (CHUIM) provides a smart way to present concise high-utility itemsets that can be more effective for making correct decisions. However, none of the existing works have focused on handling large-scale databases to integrate discovered knowledge from several distributed databases. In this paper, we first present a large-scale information fusion architecture to integrate discovered closed high-utility patterns from several distributed databases. The generic composite model is used to cluster transactions with regard to their relevant correlation, which can ensure the correctness and completeness of the fusion model. The well-known MapReduce framework is then deployed in the developed DFM-Miner algorithm to handle big datasets for information fusion and integration. Experiments are compared to the state-of-the-art CHUI-Miner and CLS-Miner algorithms for mining closed high-utility patterns, and the results indicate that the designed model is well suited to handling large-scale databases with less memory usage. Moreover, the designed MapReduce framework can speed up the mining of closed high-utility patterns in the developed fusion system.
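As a small illustration of the utility notion that HUIM builds on (the mining, closure checking, and MapReduce fusion of the paper are not reproduced here), with hypothetical unit profits and transactions:

    profit = {"a": 4, "b": 1, "c": 6}           # hypothetical unit profits
    transactions = [{"a": 2, "b": 3},           # item -> purchased quantity
                    {"a": 1, "c": 2},
                    {"b": 5, "c": 1}]

    def utility(itemset, db):
        # Sum quantity * unit profit over transactions containing the whole itemset.
        total = 0
        for t in db:
            if all(item in t for item in itemset):
                total += sum(t[item] * profit[item] for item in itemset)
        return total

    print(utility({"a"}, transactions))         # 4*2 + 4*1 = 12
    print(utility({"a", "c"}, transactions))    # 4*1 + 6*2 = 16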
Tapestry: a resilient global-scale overlay for service deployment We present Tapestry, a peer-to-peer overlay routing infrastructure offering efficient, scalable, location-independent routing of messages directly to nearby copies of an object or service using only localized resources. Tapestry supports a generic decentralized object location and routing applications programming interface using a self-repairing, soft-state-based routing layer. The paper presents the Tapestry architecture, algorithms, and implementation. It explores the behavior of a Tapestry deployment on PlanetLab, a global testbed of approximately 100 machines. Experimental results show that Tapestry exhibits stable behavior and performance as an overlay, despite the instability of the underlying network layers. Several widely distributed applications have been implemented on Tapestry, illustrating its utility as a deployment infrastructure.
A Logic-in-Memory Computer If, as presently projected, the cost of microelectronic arrays in the future will tend to reflect the number of pins on the array rather than the number of gates, the logic-in-memory array is an extremely attractive computer component. Such an array is essentially a microelectronic memory with some combinational logic associated with each storage element. A logic-in-memory computer is described that is organized around a logic-enhanced "cache" memory array. Used as a cache, a logic-in-memory array performs as a high-speed buffer between a conventional CPU and a conventional memory. The effect on the computer system of the cache and its control mechanism is to make the main memory appear to have all of the processing capabilities and almost the same performance as the cache. Operations within the array are naturally organized as operations on blocks of data called "sectors." Among the operations that can be performed are arithmetic and logical operations on pairs of elements from two sectors, and a variety of associative search operations on a single sector. For such operations, the main memory of the computer appears to the program to be composed of a collection of logic-in-memory arrays, each the size of a sector. Because of the high-speed, highly parallel sector operations, the logic-in-memory computer points to a new direction for achieving orders of magnitude increase in computer performance. Moreover, since the computer is specifically organized for large-scale integration, the increased performance might be obtained for a comparatively small dollar cost.
Communication-efficient leader election and consensus with limited link synchrony We study the degree of synchrony required to implement the leader election failure detector Ω and to solve consensus in partially synchronous systems. We show that in a system with n processes and up to f process crashes, one can implement Ω and solve consensus provided there exists some (unknown) correct process with f outgoing links that are eventually timely. In the special case where f = 1, an important case in practice, this implies that to implement Ω and solve consensus it is sufficient to have just one eventually timely link -- all the other links in the system, Θ(n²) of them, may be asynchronous. There is no need to know which link p → q is eventually timely, when it becomes timely, or what is its bound on message delay. Surprisingly, it is not even required that the source p or destination q of this link be correct: either p or q may actually crash, in which case the link p → q is eventually timely in a trivial way, and it is useless for sending messages. We show that these results are in a sense optimal: even if every process has f - 1 eventually timely links, neither Ω nor consensus can be solved. We also give an algorithm that implements Ω in systems where some correct process has f outgoing links that are eventually timely, such that eventually only f links carry messages, and we show that this is optimal. For f = 1, this algorithm ensures that all the links, except for one, eventually become quiescent.
Design of a Pressure Control System With Dead Band and Time Delay This paper investigates the control of pressure in a hydraulic circuit containing a dead band and a time-varying delay. The dead band is considered as a linear term and a perturbation. A sliding mode controller is designed. Stability conditions are established by making use of Lyapunov-Krasovskii functionals, non-perfect time delay estimation is studied, and a condition for the effect of uncertainties in the dead zone on stability is derived. The effect of different LMI formulations on conservativeness is also studied. The control law is tested in practice.
A 13-b 40-MSamples/s CMOS pipelined folding ADC with background offset trimming Two key concepts of pipelining and background offset trimming are applied to demonstrate a 13-b 40-MSamples/s CMOS analog-to-digital converter (ADC) based on the basic folding and interpolation architecture. Folding amplifier stages made of simple differential pairs are pipelined using distributed interstage track-and-holders. Background offset trimming implemented with a highly oversampling delta-sigma modulator enhances the resolution of the CMOS folders beyond 12 bits. The background offset trimming circuit continuously measures and adjusts the offsets of the folding amplifiers without interfering with the normal operation. The prototype system is further refined using subranging and digital correction, and exhibits a spurious-free dynamic range (SFDR) of 82 dB at 40 MSamples/s. The measured differential nonlinearity (DNL) and integral nonlinearity (INL) are about ±0.5 and ±2.0 LSB, respectively. The chip fabricated in 0.5-μm CMOS occupies 8.7 mm² and consumes 800 mW at 5 V.
High Frequency Buck Converter Design Using Time-Based Control Techniques Time-based control techniques for the design of high switching frequency buck converters are presented. Using time as the processing variable, the proposed controller operates with CMOS-level digital-like signals but without adding any quantization error. A ring oscillator is used as an integrator in place of conventional opamp-RC or Gm-C integrators, while a delay line is used to perform voltage-to-time conversion and to sum time signals. A simple flip-flop generates the pulse-width modulated signal from the time-based output of the controller. Hence time-based control eliminates the need for a wide bandwidth error amplifier and pulse-width modulator (PWM) in analog controllers, or a high resolution analog-to-digital converter (ADC) and digital PWM in digital controllers. As a result, it can be implemented in small area and with minimal power. Fabricated in a 180 nm CMOS process, the prototype buck converter occupies an active area of 0.24 mm², of which the controller occupies only 0.0375 mm². It operates over a wide range of switching frequencies (10-25 MHz) and regulates output to any desired voltage in the range of 0.6 V to 1.5 V with 1.8 V input voltage. With a 500 mA step in the load current, the settling time is less than 3.5 μs and the measured reference tracking bandwidth is about 1 MHz. Better than 94% peak efficiency is achieved while consuming a quiescent current of only 2 μA/MHz.
A VCO-Based Nonuniform Sampling ADC Using a Slope-Dependent Pulse Generator This paper presents a voltage-controlled oscillator (VCO)-based nonuniform sampling analog-to-digital converter (ADC) as an alternative to the level-crossing (LC)-based converters for digitizing biopotential signals. This work aims to provide a good signal-to-noise-and-distortion ratio at a low average sampling rate. In the proposed conversion method, a slope-dependent pulse generation block is used to provide a variable sample rate adjusted according to the input signal's slope. Simulation results show that the introduced method meets a target reconstruction quality with a sampling rate approaching 92 Sps, while on the same MIT-BIH Arrhythmia No. 106 ECG benchmark, the classic LC-based approach requires a sampling rate higher than 500 Sps. The benefits of the proposed method are more remarkable when the input signal is very noisy. The proposed ADC achieves a compression ratio close to 4, but with only 5.4% root-mean-square difference when tested using the MIT-BIH Arrhythmia Database.
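For comparison, a toy sketch of the classic level-crossing sampler that the paper improves upon; its slope-dependent pulse generator, which adapts the sample rate to the input slope, is not reproduced here.

    import numpy as np

    def level_crossing_sample(x, t, delta):
        # Emit (time, value) only when the signal has moved by at least delta.
        out = [(t[0], x[0])]
        last = x[0]
        for ti, xi in zip(t[1:], x[1:]):
            if abs(xi - last) >= delta:
                out.append((ti, xi))
                last = xi
        return out

    t = np.linspace(0, 1, 1000)
    x = np.sin(2 * np.pi * 3 * t)
    samples = level_crossing_sample(x, t, delta=0.1)
    print(f"{len(samples)} event-driven samples instead of {len(t)} uniform ones")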
1.2
0.2
0.2
0.2
0.2
0.2
0.2
0
0
0
0
0
0
0
Phantomcache: Obfuscating Cache Conflicts With Localized Randomization Cache conflicts due to deterministic memory-to-cache mapping have long been exploited to leak sensitive information such as secret keys. While randomized mapping is fully investigated for L1 caches, it remains unresolved how to secure a much larger last-level cache (LLC). Recent solutions periodically change the mapping strategy to disrupt the crafting of conflicted addresses, which is a critical attack procedure to exploit cache conflicts. Remapping, however, increases both miss rate and access latency. We present PhantomCache for securing an LLC with remapping-free randomized mapping. We propose a localized randomization technique to bound randomized mapping of a memory address within only a limited number of cache sets. The small randomization space offers fast set search over an LLC in a memory access. The intrinsic randomness still suffices to obfuscate conflicts and disrupt efficient exploitation of conflicted addresses. We evaluate PhantomCache against an attacker exploring the state-of-the-art attack with linear complexity. To secure an 8-bank 16 MB 16-way LLC, PhantomCache confines the randomization space of an address within 8 sets and brings only 1.20% performance degradation on individual benchmarks, 0.50% performance degradation on mixed workloads, and 0.50% storage overhead per cache line, which is 2x and 9x more efficient than the state-of-the-art solutions. Moreover, PhantomCache is solely an architectural solution and requires no software change.
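A software sketch in the spirit of the localized randomization idea: each address hashes to a small set of candidate cache sets, one of which is chosen at fill time, so a lookup probes only those few sets. The hash function, salts, and sizes below are illustrative, not the hardware design.

    import hashlib, random

    NUM_SETS, R = 1024, 8                       # total sets; candidates per address
    salts = [i.to_bytes(4, "little") for i in range(R)]

    def candidate_sets(addr):
        # R salted hashes bound the randomization space of one address.
        return [int.from_bytes(
                    hashlib.blake2b(addr.to_bytes(8, "little") + s,
                                    digest_size=4).digest(), "little") % NUM_SETS
                for s in salts]

    def fill_set(addr):
        return random.choice(candidate_sets(addr))   # randomize on cache fill

    addr = 0xDEADBEEF
    print(candidate_sets(addr))                 # the only sets a lookup must search
    print(fill_set(addr))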
CheckMate - Automated Synthesis of Hardware Exploits and Security Litmus Tests. Recent research has uncovered a broad class of security vulnerabilities in which confidential data is leaked through programmer-observable microarchitectural state. In this paper, we present CheckMate, a rigorous approach and automated tool for determining if a microarchitecture is susceptible to specified classes of security exploits, and for synthesizing proof-of-concept exploit code when it is. Our approach adopts "microarchitecturally happens-before" (μhb) graphs which prior work designed to capture the subtle orderings and interleavings of hardware execution events when programs run on a microarchitecture. CheckMate extends μhb graphs to facilitate modeling of security exploit scenarios and hardware execution patterns indicative of classes of exploits. Furthermore, it leverages relational model finding techniques to enable automated exploit program synthesis from microarchitecture and exploit pattern specifications. As a case study, we use CheckMate to evaluate the susceptibility of a speculative out-of-order processor to Flush+Reload cache side-channel attacks. The automatically synthesized results are programs representative of Meltdown and Spectre attacks. We then evaluate the same processor on its susceptibility to a different timing side-channel attack: Prime+Probe. Here, CheckMate synthesized new exploits that are similar to Meltdown and Spectre in that they leverage speculative execution, but unique in that they exploit distinct microarchitectural behaviors---speculative cache line invalidations rather than speculative cache pollution---to form a side-channel. Most importantly, our results validate the CheckMate approach to formal hardware security verification and the ability of the CheckMate tool to detect real-world vulnerabilities.
Reverse Engineering the Stream Prefetcher for Profit Micro-architectural attacks exploit timing channels at different micro-architecture units. Some of the micro-architecture units like cache automatically provide the timing difference (the difference between a hit and a miss). However, there are other units that are not documented, and their influence on the timing difference is not fully understood. One such micro-architecture unit is an L2 hardware prefetcher named Streamer. In this paper, we reverse-engineer the Stream prefetcher, which is commercially available in the Intel machines. We perform a set of experiments and provide our observations and insights. Further, we use these observations to construct a cross-thread covert channel using the Stream prefetcher, with an accuracy of 91.3% and a bandwidth of 54.44 KBps.
Abusing Cache Line Dirty States to Leak Information in Commercial Processors Caches have been used to construct various types of covert and side channels to leak information. Most existing cache channels exploit the timing difference between cache hits and cache misses. However, we introduce a new and broader classification of cache covert channel attacks: Hit+Miss, Hit+Hit, and Miss+Miss. We highlight that cache misses (or cache hits) for cache lines in different states may have more significant time differences, and these can be used as timing channels. Based on this classification, we propose a new stable and stealthy Miss+Miss cache channel. Write-back caches are widely deployed in modern processors. This paper presents in detail a way in which replacement latency differences can be used to construct timing-based channels (called WB channels) to leak information in a write-back cache. Any modification to a cache line by a sender will set it to the dirty state, and the receiver can observe this through measuring the latency of replacing this cache set. We also demonstrate how senders could exploit a different number of dirty cache lines in a cache set to improve transmission bandwidth with symbols encoding multiple bits. The peak transmission bandwidths of the WB channels in commercial systems can vary between 1300 and 4400 kbps per cache set in a hyper-threaded setting without shared memory between the sender and the receiver. In contrast to most existing cache channels, which always target specific memory addresses, the new WB channels focus on the cache set and cache line states, making it difficult for the channel to be disturbed by other processes on the core, and they can still work in a cache using a random replacement policy. We also analyzed the stealthiness of WB channels from the perspective of the number of cache loads and cache miss rates. We discuss and evaluate possible defenses. The paper finishes by discussing various forms of side-channel attack.
Unveiling Hardware-based Data Prefetcher, a Hidden Source of Information Leakage. Data prefetching is a hardware-based optimization mechanism used in most of the modern microprocessors. It fetches data to the cache before it is needed. In this paper, we present a novel microarchitectural attack that exploits the prefetching mechanism. Our attack targets Instruction pointer (IP)-based stride prefetching in Intel processors. Stride prefetcher detects memory access patterns with a regular stride, which are likely to be found in lookup table-based cryptographic implementations. By monitoring the prefetching activities near the lookup table, attackers can extract sensitive information such as secret keys from victim applications. This kind of leakage from prefetching has never been considered in the design of constant time algorithm to prevent side-channel attacks. We show the potential of the proposed attack by applying it against the Elliptic Curve Diffie-Hellman (ECDH) algorithm built upon the latest version of OpenSSL library. To the best of our knowledge, this is the first microarchitectural side-channel attack exploiting the hardware prefetching of modern microprocessors.
Speculative interference attacks: breaking invisible speculation schemes Recent security vulnerabilities that target speculative execution (e.g., Spectre) present a significant challenge for processor design. These highly publicized vulnerabilities use speculative execution to learn victim secrets by changing the cache state. As a result, recent computer architecture research has focused on invisible speculation mechanisms that attempt to block changes in cache state due to speculative execution. Prior work has shown significant success in preventing Spectre and other attacks at modest performance costs. In this paper, we introduce speculative interference attacks, which show that prior invisible speculation mechanisms do not fully block speculation-based attacks that use cache state. We make two key observations. First, mis-speculated younger instructions can change the timing of older, bound-to-retire instructions, including memory operations. Second, changing the timing of a memory operation can change the order of that memory operation relative to other memory operations, resulting in persistent changes to the cache state. Using both of these observations, we demonstrate (among other attack variants) that secret information accessed by mis-speculated instructions can change the order of bound-to-retire loads. Load timing changes can therefore leave secret-dependent changes in the cache, even in the presence of invisible speculation mechanisms. We show that this problem is not easy to fix. Speculative interference converts timing changes to persistent cache-state changes, and timing is typically ignored by many cache-based defenses. We develop a framework to understand the attack and demonstrate concrete proof-of-concept attacks against invisible speculation mechanisms. We conclude with a discussion of security definitions that are sufficient to block the attacks, along with preliminary defense ideas based on those definitions.
Cache Games -- Bringing Access-Based Cache Attacks on AES to Practice Side channel attacks on cryptographic systems exploit information gained from physical implementations rather than theoretical weaknesses of a scheme. In recent years, major achievements were made for the class of so-called access-driven cache attacks. Such attacks exploit the leakage of the memory locations accessed by a victim process. In this paper we consider the AES block cipher and present an attack which is capable of recovering the full secret key in almost real time for AES-128, requiring only a very limited number of observed encryptions. Unlike previous attacks, we do not require any information about the plaintext (such as its distribution, etc.). Moreover, for the first time, we also show how the plaintext can be recovered without having access to the ciphertext at all. It is the first working attack on AES implementations using compressed tables. There, no efficient techniques to identify the beginning of AES rounds are known, which is the fundamental assumption underlying previous attacks. We have a fully working implementation of our attack which is able to recover AES keys after observing as little as 100 encryptions. It works against the OpenSSL 0.9.8n implementation of AES on Linux systems. Our spy process does not require any special privileges beyond those of a standard Linux user. A contribution of probably independent interest is a denial of service attack on the task scheduler of current Linux systems (CFS), which allows one to observe (on average) every single memory access of a victim process.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area² product (EDA²P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA²P and EDAP.
MorphoSys: An Integrated Reconfigurable System for Data-Parallel and Computation-Intensive Applications This paper introduces MorphoSys, a reconfigurable computing system developed to investigate the effectiveness of combining reconfigurable hardware with general-purpose processors for word-level, computation-intensive applications. MorphoSys is a coarse-grain, integrated, and reconfigurable system-on-chip, targeted at high-throughput and data-parallel applications. It is comprised of a reconfigurable array of processing cells, a modified RISC processor core, and an efficient memory interface unit. This paper describes the MorphoSys architecture, including the reconfigurable processor array, the control processor, and data and configuration memories. The suitability of MorphoSys for the target application domain is then illustrated with examples such as video compression, data encryption and target recognition. Performance evaluation of these applications indicates improvements of up to an order of magnitude (or more) on MorphoSys, in comparison with other systems.
Adaptive Synchronization of an Uncertain Complex Dynamical Network This brief paper further investigates the locally and globally adaptive synchronization of an uncertain complex dynamical network. Several network synchronization criteria are deduced. Especially, our hypotheses and designed adaptive controllers for network synchronization are rather simple in form. It is very useful for future practical engineering design. Moreover, numerical simulations are also given to show the effectiveness of our synchronization approaches.
Enabling open-source cognitively-controlled collaboration among software-defined radio nodes Software-defined radios (SDRs) are now recognized as a key building block for future wireless communications. We have spent the past year enhancing existing open software to create a software-defined data radio. This radio extends the notion of software-defined behavior to higher layers in the protocol stack: most importantly through the media access layer. Our particular approach to the problem has been guided by the desire to allow fine-grained cognitive control of the radio. We describe our system, Adaptive Dynamic Radio Open-source Intelligent Team (ADROIT).
A 2.87±0.19 dB NF 3.1∼10.6 GHz ultra-wideband low-noise amplifier using 0.18 µm CMOS technology.
Reduction and IR-drop compensation techniques for reliable neuromorphic computing systems The neuromorphic computing system (NCS) is a promising architecture to combat the well-known memory bottleneck of the Von Neumann architecture. The recent breakthrough in memristor devices has made an important step toward realizing a low-power, small-footprint NCS on a chip. However, the currently low manufacturing reliability of nano-devices and the voltage IR-drop along metal wires and memristor arrays severely limit the scale of memristor crossbar based NCS and hinder design scalability. In this work, we propose a novel system reduction scheme that significantly lowers the required dimension of the memristor crossbars in NCS while maintaining high computing accuracy. An IR-drop compensation technique is also proposed to overcome the adverse impacts of the wire resistance and the sneak-path problem in large memristor crossbar designs. Our simulation results show that the proposed techniques can improve computing accuracy by 27.0% and reduce circuit area by 38.7% compared to the original NCS design.
Power Efficiency Comparison of Event-Driven and Fixed-Rate Signal Conversion and Compression for Biomedical Applications Energy-constrained biomedical recording systems need power-efficient data converters and good signal compression in order to meet the stringent power consumption requirements of many applications. In literature today, typically a SAR ADC in combination with digital compression is used. Recently, alternative event-driven sampling techniques have been proposed that incorporate compression in the ADC, such as level-crossing A/D conversion. This paper describes the power efficiency analysis of such level-crossing ADC (LCADC) and the traditional fixed-rate SAR ADC with simple compression. A model for the power consumption of the LCADC is derived, which is then compared to the power consumption of the SAR ADC with zero-order hold (ZOH) compression for multiple biosignals (ECG, EMG, EEG, and EAP). The LCADC is more power efficient than the SAR ADC up to a cross-over point in quantizer resolution (for example 8 bits for an EEG signal). This cross-over point decreases with the ratio of the maximum to average slope in the signal of the application. It also changes with the technology and design techniques used. The LCADC is thus suited for low to medium resolution applications. In addition, the event-driven operation of an LCADC results in fewer data to be transmitted in a system application. The event-driven LCADC without timer and with single-bit quantizer achieves a reduction in power consumption at system level of two orders of magnitude, an order of magnitude better than the SAR ADC with ZOH compression. At system level, the LCADC thus offers a big advantage over the SAR ADC.
1.2
0.2
0.2
0.2
0.1
0.05
0.007143
0
0
0
0
0
0
0
Adaptive blind compensation of gain and timing mismatches in M-channel time-interleaved ADCs Gain and timing mismatches among sub-converters limit the performance of time-interleaved analog-to-digital converters (TIADCs). In this paper we present a blind adaptive method, based on the least-mean-square (LMS) algorithm, to compensate gain and timing mismatches in TIADCs. Like other methods in the literature, we assume a slightly oversampled input signal; unlike them, however, our method can be applied to an arbitrary number of channels in a straightforward way. We give a detailed description of the compensation and identification parts of the method and demonstrate its effectiveness through numerical simulations.
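A much-simplified LMS sketch in the spirit of this approach: the paper's method is blind and also corrects timing skew, whereas this toy version assumes a known reference signal and adapts per-channel gains only.

    import numpy as np

    n, mu = 4000, 0.01
    t = np.arange(n)
    ref = np.sin(2 * np.pi * 0.11 * t)          # ideal interleaved output
    gains_true = np.array([1.00, 1.03])         # channel 1 has a +3% gain error
    adc = ref * gains_true[t % 2]               # interleaved, mismatched samples

    w = np.ones(2)                              # per-channel correction gains
    for i in range(n):
        ch = i % 2
        e = ref[i] - w[ch] * adc[i]             # error against the reference
        w[ch] += mu * e * adc[i]                # LMS update for that channel

    print(w * gains_true)                       # ~ [1, 1]: mismatch removed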
A Polynomial-Based Time-Varying Filter Structure for the Compensation of Frequency-Response Mismatch Errors in Time-Interleaved ADCs This paper introduces a structure for the compensation of frequency-response mismatch errors in M-channel time-interleaved analog-to-digital converters (ADCs). It makes use of a number of fixed digital filters, approximating differentiators of different orders, and a few variable multipliers that correspond to parameters in polynomial models of the channel frequency responses. Whenever the channel frequency responses change, which occurs from time to time in a practical time-interleaved ADC, it suffices to alter the values of these variable multipliers. In this way, expensive on-line filter design is avoided. The paper includes several design examples that illustrate the properties and capabilities of the proposed structure.
Seven-bit 700-MS/s Four-Way Time-Interleaved SAR ADC With Partial Vcm-Based Switching. This brief presents a 7-bit 700-MS/s four-way time-interleaved successive approximation register (SAR) analog-to-digital converter (ADC). A partial Vcm-based switching method is proposed that requires less digital overhead from the SAR controller and achieves better conversion accuracy. Compared with switchback switching, the proposed method can further reduce the common mode variation by 50%. In ...
Low complexity digital background calibration algorithm for the correction of timing mismatch in time-interleaved ADCs. A low-complexity post-processing algorithm to estimate and compensate for timing skew error in a four-channel time-interleaved analog to digital converter (TIADC) is presented in this paper, together with its hardware implementation. The Lagrange interpolator is used as the reconstruction filter, which alleviates online interpolator redesign by using a simplified representation of coefficients. Simulation results show that the proposed algorithm can suppress error tones for input signal frequencies from 0 to 0.4fs. The proposed structure has at least a 41% reduction in the number of required multipliers. Implementation of the algorithm for a four-channel 10-bit TIADC shows that, for a 0.4fs input signal frequency, the Signal to Noise and Distortion Ratio (SNDR) and Spurious-Free Dynamic Range (SFDR) are improved by 31.26 dB and 43.7 dB, respectively. Our proposed approximation technique does not degrade the performance of the system, resulting in the same SNDR and SFDR as the exact coefficient values. In addition, the proposed structure provides acceptable performance in the presence of wideband signals.
A Digital Adaptive Calibration Method of Timing Mismatch in TIADC Based on Adjacent Channels Lagrange Mean Value Difference This paper presents a digital adaptive calibration method to overcome the effect of timing mismatches in the time-interleaved analog-to-digital converter (TIADC). A channel splitting-recombining structure is proposed: the TIADC with M channels is divided into log2(M) stages for calibration, and each stage is composed of one or more two-channel sub-systems. The Lagrange mean value difference of adjacent channels is constructed by arithmetical approximation to estimate the timing mismatch. The method does not need to oversample the band-limited input signal, nor does it require the input signal to have significant power over its bandwidth for identification. Timing mismatches are corrected by cascaded differentiator-multipliers. A reuse structure is developed to reduce the consumption of hardware resources. Simulation results for the four-channel TIADC show that the spurious-free dynamic range (SFDR) can be improved from 31.19 to 73.39 dB. A hardware implementation based on FPGA is also carried out, and the corrected SFDR is measured at 72.29 dB. The approach has low complexity, and the bandwidth of the input signal can reach the Nyquist bandwidth of the complete TIADC system.
A 10-bit 2.6-GS/s Time-Interleaved SAR ADC With a Digital-Mixing Timing-Skew Calibration Technique. A 16-channel time-interleaved 10-bit SAR analog-to-digital converter (ADC), employing the proposed delta-sampling auxiliary SAR ADCs and a digital-mixing calibration technique to compensate timing-skew error, achieves a 2.6-GS/s sampling rate. The ADC has been fabricated in a 40-nm CMOS technology and achieves a 50.6-dB signal-to-noise-and-distortion ratio at Nyquist rate while dissipating 18.4 mW...
Joint mismatch and channel compensation for high-speed OFDM receivers with time-interleaved ADCs Analog-to-digital converters (ADCs) with high sampling rates and output resolution are required for the design of mostly digital transceivers in emerging multi-Gigabit communication systems. A promising approach is to use a time-interleaved (TI) architecture with slower sub-ADCs in parallel, but mismatch among the sub-ADCs, if left uncompensated, can cause error floors in receiver performance. Conventional mismatch compensation schemes typically have complexity (in terms of number of multiplications) that increases with the desired resolution at the output of the TI-ADC. In this paper, we investigate an alternative approach, in which mismatch and channel dispersion are compensated jointly, with the performance metric being overall link reliability rather than ADC performance. For an OFDM system, we characterize the structure of mismatch-induced interference, and demonstrate the efficacy of a frequency-domain interference suppression scheme whose complexity is independent of constellation size (which determines the desired resolution). Numerical results from computer simulation and from experiments on a hardware prototype show that the performance with the proposed joint mismatch and channel compensation technique is close to that without mismatch. While the proposed technique works with offline estimates of mismatch parameters, we provide an iterative, online method for joint estimation of mismatch and channel parameters which leverages the training overhead already available in communication signals.
Chord: a scalable peer-to-peer lookup protocol for internet applications A fundamental problem that confronts peer-to-peer applications is the efficient location of the node that stores a desired data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis and simulations show that Chord is scalable: Communication cost and the state maintained by each node scale logarithmically with the number of Chord nodes.
The Influence of the Sigmoid Function Parameters on the Speed of Backpropagation Learning The sigmoid function is the most commonly known function used in feed-forward neural networks because of its nonlinearity and the computational simplicity of its derivative. In this paper we discuss a variant sigmoid function with three parameters that denote the dynamic range, symmetry, and slope of the function, respectively. We illustrate how these parameters influence the speed of backpropagation learning and introduce a hybrid sigmoidal network with different parameter configurations in different layers. By regulating and modifying the sigmoid function parameter configuration in different layers, the error signal problem, oscillation problem, and asymmetrical input problem can be reduced. To compare the learning capabilities and the learning rate of the hybrid sigmoidal networks with conventional networks, we have tested the two-spirals benchmark, which is known to be a very difficult task for backpropagation and its relatives.
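One common three-parameter form consistent with this description (the paper's exact parameterization may differ): a sets the dynamic range, b shifts the symmetry, and c scales the slope.

    import numpy as np

    def sigmoid(x, a=1.0, b=0.0, c=1.0):
        return a / (1.0 + np.exp(-c * x)) - b

    def sigmoid_grad(x, a=1.0, b=0.0, c=1.0):
        s = 1.0 / (1.0 + np.exp(-c * x))
        return a * c * s * (1.0 - s)            # peak slope a*c/4 at x = 0

    x = np.array([-2.0, 0.0, 2.0])
    print(sigmoid(x, a=2.0, b=1.0, c=1.5))      # range (-1, 1): a symmetric unit
    print(sigmoid_grad(x, a=2.0, b=1.0, c=1.5))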
Error exponents for asymmetric two-user discrete memoryless source-channel coding systems We study the transmission of two discrete memoryless correlated sources, consisting of a common and a private source, over a discrete memoryless multiterminal channel with two transmitters and two receivers. At the transmitter side, the common source is observed by both encoders but the private source can only be accessed by one encoder. At the receiver side, both decoders need to reconstruct the common source, but only one decoder needs to reconstruct the private source. We hence refer to this system as the asymmetric two-user source-channel coding system. We derive a universally achievable lossless joint source-channel coding (JSCC) error exponent pair for the two-user system by using a technique which generalizes Csiszár's type-packing lemma (1980) for the point-to-point (single-user) discrete memoryless source-channel system. We next investigate the largest convergence rate of asymptotic exponential decay of the system (overall) probability of erroneous transmission, i.e., the system JSCC error exponent. We obtain lower and upper bounds for the exponent. As a consequence, we establish a JSCC theorem with single-letter characterization and we show that the separation principle holds for the asymmetric two-user scenario. By introducing common randomization, we also provide a formula for the tandem (separate) source-channel coding error exponent. Numerical examples show that for a large class of systems consisting of two correlated sources and an asymmetric multiple-access channel with additive noise, the JSCC error exponent considerably outperforms the corresponding tandem coding error exponent.
Stacked-Chip Implementation of On-Chip Buck Converter for Distributed Power Supply System in SiPs An on-chip buck converter which is implemented by stacking chips and suitable for on-chip distributed power supply systems is proposed. The operation of the converter with 3-D chip stacking is experimentally verified for the first time. The manufactured converter achieves a maximum power efficiency of 62% for an output current of 70 mA and a voltage conversion ratio of 0.7 with a switching frequen...
Efficiency of a Regenerative Direct-Drive Electromagnetic Active Suspension. The efficiency and power consumption of a direct-drive electromagnetic active suspension system for automotive applications are investigated. A McPherson suspension system is considered, where the strut consists of a direct-drive brushless tubular permanent-magnet actuator in parallel with a passive spring and damper. This suspension system can both deliver active forces and regenerate power due to imposed movements. A linear quadratic regulator controller is developed for the improvement of comfort and handling (dynamic tire load). The power consumption is simulated as a function of the passive damping in the active suspension system. Finally, measurements are performed on a quarter-car test setup to validate the analysis and simulations.
Understanding the regenerative comparator circuit The regenerative comparator circuit which lies at the heart of A/D conversion, slicer circuits, and memory sensing, is unstable, time-varying, nonlinear, and with multiple equilibria. That does not mean, as this paper shows, that it cannot be understood with simple equivalent circuits that reveal its dynamics completely, and enable it to be designed to specifications on static and dynamic offset and noise. The analysis is applied to the StrongArm latch.
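A first-order consequence of the regenerative dynamics is exponential growth of the differential output, dV(t) = dV(0)·e^(t/τ) with τ = C/gm, so the decision time grows only logarithmically as the initial overdrive shrinks. The numbers below are illustrative, not from the paper:

```python
# Back-of-envelope regeneration model for a latch-type comparator.
import math

gm = 1e-3      # transconductance of the cross-coupled pair [S] (illustrative)
C = 20e-15     # capacitance at the regeneration nodes [F] (illustrative)
tau = C / gm   # regeneration time constant [s]

dV0 = 1e-3     # initial differential voltage after sampling [V]
Vdec = 0.5     # swing needed for a valid logic decision [V]

# dV(t) = dV0 * exp(t / tau)  =>  t_decide = tau * ln(Vdec / dV0)
t_decide = tau * math.log(Vdec / dV0)
print(f"tau = {tau*1e12:.1f} ps, decision time = {t_decide*1e12:.1f} ps")
```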
A Bidirectional Neural Interface IC With Chopper Stabilized BioADC Array and Charge Balanced Stimulator. We present a bidirectional neural interface with a 4-channel biopotential analog-to-digital converter (bioADC) and a 4-channel current-mode stimulator in 180 nm CMOS. The bioADC directly transduces microvolt biopotentials into a digital representation without a voltage-amplification stage. Each bioADC channel comprises a continuous-time first-order ΔΣ modulator with a chopper-stabilized OTA input ...
1.072469
0.070593
0.066667
0.066667
0.066667
0.033333
0.003704
0
0
0
0
0
0
0
Multi-Band Frequency Transformations, Matching Networks and Amplifiers. In this paper, a technique for the synthesis of lumped element multi-band matching networks is proposed using frequency transformations. The proposed technique has been generalized for n -bands using 1→ n frequency transformations. The effect of the transformations on the bandwidth of the matching network and the effect of inductor losses on the transducer loss of the matching network are analyzed...
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use dominance frontiers, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
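The dominance-frontier concept mentioned above has a particularly compact formulation (due to Cooper, Harvey and Kennedy, equivalent in result to the original algorithm): walk up the dominator tree from each predecessor of a join point. The tiny CFG and its immediate dominators below are hard-coded for illustration:

```python
# Sketch of dominance-frontier computation (Cooper-Harvey-Kennedy style).
preds = {            # predecessor lists of a small diamond-shaped CFG
    "entry": [],
    "A": ["entry"],
    "B": ["A"],
    "C": ["A"],
    "D": ["B", "C"],
}
idom = {"A": "entry", "B": "A", "C": "A", "D": "A"}  # immediate dominators

df = {n: set() for n in preds}
for node, ps in preds.items():
    if len(ps) >= 2:                  # only join points contribute
        for p in ps:
            runner = p
            while runner != idom[node]:
                df[runner].add(node)  # node is in runner's dominance frontier
                runner = idom[runner]

for n in sorted(df):
    print(n, "->", sorted(df[n]))
```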
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ⋄W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ⋄W. Thus, ⋄W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
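A toy reading of the single "no-information" null idea: a tuple is subsumed by any tuple that agrees with it on all of its non-null fields, and a generalized union can drop subsumed tuples. This is only a loose illustration of the abstract, not the paper's formal definitions; NULL is modeled as Python's None:

```python
# Toy illustration of null-aware tuple subsumption and generalized union.
def subsumed_by(t, u):
    """True if u is at least as informative as t (agrees on t's non-null fields)."""
    return all(a is None or a == b for a, b in zip(t, u))

def generalized_union(r, s):
    tuples = list(dict.fromkeys(r + s))          # de-duplicate, keep order
    return [t for t in tuples
            if not any(u != t and subsumed_by(t, u) for u in tuples)]

r = [("smith", None), ("jones", "sales")]
s = [("smith", "hr")]
# ("smith", None) is subsumed by the more informative ("smith", "hr").
print(generalized_union(r, s))
```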
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area2 product (EDA2P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA2P and EDAP.
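The EDAP and EDA²P metrics mentioned above are plain products of energy, delay (squared for EDA²P) and area; the design-point numbers below are placeholders rather than McPAT outputs:

```python
# Worked example of the energy-delay-area figures of merit from the abstract.
def edap(energy_j, delay_s, area_mm2):
    return energy_j * delay_s * area_mm2          # energy-delay-area product

def eda2p(energy_j, delay_s, area_mm2):
    return energy_j * delay_s ** 2 * area_mm2     # energy-delay^2-area product

configs = {  # hypothetical design points: (energy [J], delay [s], area [mm^2])
    "4-core cluster": (1.0, 2.0e-3, 40.0),
    "8-core cluster": (1.1, 1.6e-3, 55.0),
}
for name, (e, d, a) in configs.items():
    print(f"{name}: EDAP={edap(e, d, a):.3e}  EDA2P={eda2p(e, d, a):.3e}")
```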
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
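For the lasso, one of the applications listed above, the ADMM iteration reduces to a linear solve, a soft-threshold, and a dual update; the sketch below follows that standard splitting with illustrative data and a fixed penalty ρ:

```python
# Minimal ADMM for the lasso: minimize 0.5*||Ax - b||^2 + lam*||z||_1, x = z.
import numpy as np

def lasso_admm(A, b, lam, rho=1.0, iters=200):
    m, n = A.shape
    x = z = u = np.zeros(n)
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))  # factor once, reuse
    Atb = A.T @ b
    for _ in range(iters):
        # x-update: solve (A^T A + rho I) x = A^T b + rho (z - u)
        x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))
        # z-update: soft-thresholding (proximal operator of the l1 norm)
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0.0)
        u = u + x - z                                  # dual update
    return z

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20); x_true[:3] = (1.0, -2.0, 0.5)
b = A @ x_true + 0.01 * rng.standard_normal(50)
print(np.round(lasso_admm(A, b, lam=1.0), 2))
```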
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)- size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε2}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D+√n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(logN), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator, while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via error amplifier) with a moderate-gain high-frequency path (via bandpass filter) at the inputs of the PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate the two zeros required in a Type III compensator. A constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process, with the constant Gm/C biasing technique applied to one of the designs. Measurement results show that the converter output settles within 7 μs for a load current step of 500 mA. A peak efficiency of 97% is obtained at 360 mW output power, and a high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of the proposed compensator are reduced by >75% in both designs, compared to an equivalent conventional Type III compensator.
Conductance modulation techniques in switched-capacitor DC-DC converter for maximum-efficiency tracking and ripple mitigation in 22nm Tri-gate CMOS Active conduction modulation techniques are demonstrated in a fully integrated multi-ratio switched-capacitor voltage regulator with hysteretic control, implemented in 22nm tri-gate CMOS with high-density MIM capacitor. We present (i) an adaptive switching frequency and switch-size scaling scheme for maximum efficiency tracking across a wide range of voltages and currents, governed by a frequency-based control law that is experimentally validated across multiple dies and temperatures, and (ii) a simple active ripple mitigation technique to modulate the gate drive of select MOSFET switches effectively in all conversion modes. Efficiency boosts of up to 15% are measured under light-load conditions. Load-independent output ripple of <50 mV is achieved, enabling fewer interleaving phases. Testchip implementations and measurements demonstrate ease of integration in SoC designs, power efficiency benefits and EMI/RFI improvements.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Scalable, dynamic and growing hardware self-organizing architecture for real-time vector quantization In the era of the Internet of Things (IoT) and Big Data (BD), a significant amount of data is permanently generated every day. The data size of collected data streams is now reaching zettabytes (i.e., 10²¹ bytes), and their processing and analysis becomes more and more challenging, especially in embedded systems, where the overall goal is to maximize performance per watt while meeting real-time requirements and keeping the overall power consumption within very limited power budgets. The collected data are often reduced by means of clustering, vector quantization or compression before their further processing. Unsupervised learning techniques such as Self-Organizing Maps (SOMs), which need no prior knowledge of the processed data, are perfect candidates for this task. However, real-time vector quantization with SOMs requires high performance and dynamic online configurability. The software counterparts of SOMs are highly flexible but offer limited performance per watt, whereas hardware SOMs generally lack flexibility. In this paper, a novel scalable, dynamic and growing hardware self-organizing map (SOM) is presented. The presented hardware SOM architecture is dynamically configurable and adaptable in terms of neurons, map size and vector dimension depending on application-specific needs. The proposed architecture is validated on different map sizes (up to 16×16) with different vector widths, applied to real-time color quantization and pattern distribution recognition.
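The vector-quantization role of a SOM can be seen in a few lines of NumPy: find the best-matching unit, then pull it and its neighbors toward the input. Map size, learning-rate and neighborhood schedules below are illustrative and unrelated to the hardware parameters in the abstract:

```python
# Compact NumPy sketch of online SOM training for vector quantization.
import numpy as np

rng = np.random.default_rng(1)
H = W = 8                          # 8x8 map (illustrative)
dim = 3                            # e.g., RGB color quantization
weights = rng.random((H, W, dim))
coords = np.stack(np.meshgrid(np.arange(H), np.arange(W), indexing="ij"), axis=-1)

data = rng.random((2000, dim))     # stand-in for the input stream
for t, x in enumerate(data):
    lr = 0.5 * (0.01 / 0.5) ** (t / len(data))     # decaying learning rate
    sigma = 3.0 * (0.5 / 3.0) ** (t / len(data))   # shrinking neighborhood
    bmu = np.unravel_index(np.argmin(((weights - x) ** 2).sum(-1)), (H, W))
    d2 = ((coords - np.array(bmu)) ** 2).sum(-1)   # grid distance to the BMU
    h = np.exp(-d2 / (2 * sigma ** 2))[..., None]  # neighborhood function
    weights += lr * h * (x - weights)

# Quantize: each input maps to its best-matching unit's weight vector.
err = np.mean([((weights - x) ** 2).sum(-1).min() for x in data])
print(f"mean quantization error: {err:.4f}")
```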
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use dominance frontiers, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ⋄W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ⋄W. Thus, ⋄W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area2 product (EDA2P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA2P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)- size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε2}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D+√n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(logN), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator, while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via error amplifier) with a moderate-gain high-frequency path (via bandpass filter) at the inputs of the PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate the two zeros required in a Type III compensator. A constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process, with the constant Gm/C biasing technique applied to one of the designs. Measurement results show that the converter output settles within 7 μs for a load current step of 500 mA. A peak efficiency of 97% is obtained at 360 mW output power, and a high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of the proposed compensator are reduced by >75% in both designs, compared to an equivalent conventional Type III compensator.
Fundamental analysis of a car to car visible light communication system This paper presents a mathematical model for car-to-car (C2C) visible light communications (VLC) that aims to predict the system performance under different communication geometries. A market-weighted headlamp beam pattern model is employed. We consider both the line-of-sight (LOS) and non-line-of-sight (NLOS) links, and outline the relationship between the communication distance, the system bit error rate (BER) performance and the BER distribution on a vertical plane. Results show that by placing a photodetector (PD) at a height of 0.2-0.4 m above road surface in the car, the communications coverage range can be extended up to 20 m at a data rate of 2Mbps.
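A rough feel for the distance dependence can be had from a generic Lambertian LOS model standing in for the market-weighted headlamp pattern actually used in the paper; every constant below is a placeholder, and the OOK BER is taken as Q(√SNR):

```python
# Rough on-axis LOS link sketch with a generic Lambertian emitter.
import math

def q(x):   # Gaussian tail function Q(x)
    return 0.5 * math.erfc(x / math.sqrt(2))

Pt = 1.0          # transmitted optical power [W] (illustrative)
m = 1             # Lambertian order (stand-in for the headlamp pattern)
A = 1e-4          # photodetector area [m^2]
R = 0.5           # responsivity [A/W]
noise_i = 5e-9    # RMS noise current [A]

for d in (5, 10, 20, 40):   # distance [m], on-axis so the cosine terms are 1
    Pr = Pt * (m + 1) * A / (2 * math.pi * d ** 2)   # received optical power
    snr = (R * Pr) ** 2 / noise_i ** 2               # electrical SNR for OOK
    ber = q(math.sqrt(snr))
    print(f"d={d:2d} m: Pr={Pr*1e6:8.2f} uW  BER={ber:.2e}")
```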
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
20.5 A 2-/3-phase fully integrated switched-capacitor DC-DC converter in bulk CMOS for energy-efficient digital circuits with 14% efficiency improvement Reducing the supply voltage of digital circuits to the sub- or near-threshold regions minimizes dynamic power consumption and achieves better efficiency [1]. This technique is widely used in energy-efficient applications, and is especially beneficial for wirelessly powered devices such as wearable electronics, biomedical implants and smart sensor networks. Such devices have long standby times and battery-less operation is highly desirable. As shown in Fig. 20.5.1, for a typical wireless power transmission system, there is a gap between the rectified VIN (>2V) and the low supply voltage VOUT (<700mV) for powering up energy-efficient digital circuits. To bridge this voltage gap without sacrificing compact size, fully integrated power converters with a low voltage conversion ratio (M=VOUT/VIN) and high efficiency are needed. However, a low M results in low efficiency for linear regulators and fully integrated buck converters. On the other hand, fully integrated switched-capacitor power converters (SCPCs) are good alternatives that can achieve high efficiency at low M in low power applications [2-5].
Input-adaptive dual-output power management unit for energy harvesting devices An input-adaptive dual-output charge pump (DOCP) with variable fractional conversion ratio and cascaded low-dropout regulators (LDRs) is implemented for the power management unit (PMU) of implantable energy harvesting devices. The charge pump has one step-down and one step-up output adaptively converted from a 1.8-to-4.0V harvested energy source, and the outputs of the LDRs are 1V and 3V, respectively. To improve the overall efficiency, conversion ratios of k/6 (k=2,..., 12) are realized by 1/2- and 1/3-capacitors using an interleaving scheme. The PMU is designed in a 0.13μm 3.3V CMOS process, and attains a peak efficiency of 81.3% and efficiency better than 55% over a wide input range.
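The point of the fine-grained k/6 ratios is that an ideal SC converter regulated below its no-load output behaves like a linear regulator fed from M·VIN, so efficiency is bounded by VOUT/(M·VIN); a ratio just above the target wastes little headroom. A small sketch with illustrative voltages:

```python
# Why fine-grained conversion ratios help an ideal SC converter.
ratios = [k / 6 for k in range(2, 13)]   # k/6, k = 2..12, as in the abstract

def best_ratio(vin, vout):
    feasible = [m for m in ratios if m * vin >= vout]  # ratios that can reach vout
    m = min(feasible)                    # smallest such ratio wastes least headroom
    return m, vout / (m * vin)           # ideal efficiency bound VOUT/(M*VIN)

for vin in (1.8, 2.5, 3.3, 4.0):
    m, eff = best_ratio(vin, vout=1.0)
    print(f"Vin={vin:.1f} V: ratio={m:.3f}  ideal efficiency <= {eff:.1%}")
```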
Analysis and Design Strategy of On-Chip Charge Pumps for Micro-power Energy Harvesting Applications.
A digitally-controlled 2-/3-phase 6-ratio switched- capacitor DC-DC converter with adaptive ripple reduction and efficiency improvements. A digitally controlled 2-/3-phase 6-ratio switched-capacitor (SC) DC-DC converter with low output voltage ripple and high efficiency is presented. Operating with a wide input voltage range of 1.6V to 3.3V, this SC converter can deliver a maximum power of 250mW to an output of 0.5V to 3V. Six voltage conversion ratios (VCRs) can be generated with only 2 flying capacitors by using 2- or 3-phase operation. Compared with a 2-phase SC converter, the maximum efficiency improvement is 20%. An adaptive ripple reduction scheme is proposed to achieve 4 times reduction in the output voltage ripple. Complexity of controller design is reduced by using digital synthesis and the technique is scalable. Fast loop response is achieved by synchronized hysteretic control. The converter achieves a peak efficiency of 91%.
A Light-Load Efficient Fully Integrated Voltage Regulator in 14-nm CMOS With 2.5-nH Package-Embedded Air-Core Inductors Fully integrated voltage regulators (FIVRs) offer many advantages, such as fine-grained power management, fast transient response, and reduced form factor. This article addresses light-load efficiency in FIVRs with nH-scale air-core inductors. The challenges of implementing efficient discontinuous conduction mode (DCM) operation at high switching frequencies are discussed, which include zero current detection, inductor ac-loss effects, and power delivery network (PDN) resonances. A prototype in 14-nm CMOS is presented, which shows the DCM operation at up to 70 MHz with a peak efficiency of 88% for 1.6–1.2-V conversion.
20.4 A 123-phase DC-DC converter-ring with fast-DVS for microprocessors Inspired by The Square of Vatican City, a fully integrated step-down switched-capacitor DC-DC converter ring with 100+ phases is designed with a fast dynamic voltage scaling (DVS) feature for the microprocessor in portable or wearable devices. As shown in Fig. 20.4.1, this symmetrical ring-shaped converter surrounds its load in the square and supplies the on-chip power grid, such that a good quality power supply can be easily accessed at any point of the chip edges. There are 30 phases on the top edge and 31 phases on each of the other 3 edges, making 123 phases in total. The phase number and unit cell dimensions of this architecture can easily be adjusted to fit the floor plan of the load. The pads of the converter-ring are placed at the corners, and will not affect the pads of the load. Moreover, by using the proposed VDD-controlled oscillator (VDDCO), the frequency of which is controlled by varying its supply voltage, a hitherto unexplored feature of the multiphase DC-DC architecture is exposed: the control-loop unity gain frequency (UGF) could be designed to be higher than the switching frequency.
Multi-Phase 1 GHz Voltage Doubler Charge Pump in 32 nm Logic Process A multi-phase 1 GHz charge pump in 32 nm logic process demonstrates a compact area (159 × 42 µm²) for boosting supply voltage from twice the threshold voltage (2 Vth) to 3-4 Vth. Self-contained clocking with metal-finger flying capacitors enables embedding voltage boost functionality in close proximity to digital logic for supplying the low-current Vmin requirement of state elements in logic blocks. Multi-phase operation with phase separation of the order of buffer delays avoids the need for a large storage reservoir capacitor. Special configuration of the pump stages to work in parallel enables a fast (5 ns) output transition from disable to enable state. The multi-phase pump operated as a 1 V to 2 V doubler with >5 mA output capability addresses the need for a gated power delivery solution for logic blocks having state-preservation Vmin requirements.
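To first order, an ideal doubler delivering a load current droops as VOUT = 2·VIN − ILOAD/(f·C); the values below are illustrative, not the 32 nm design's, but they show why a >5 mA capability at 1 GHz needs only picofarads of flying capacitance:

```python
# First-order output model of an ideal voltage doubler under load.
f = 1e9        # switching frequency [Hz]
C = 10e-12     # flying capacitance [F] (illustrative)
Vin = 1.0      # input supply [V]

for i_load in (1e-3, 5e-3, 10e-3):
    v_out = 2 * Vin - i_load / (f * C)   # droop grows linearly with load current
    print(f"Iload={i_load*1e3:4.1f} mA -> Vout={v_out:.2f} V")
```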
Ultra-Low Power VLSI Circuit Design Demystified and Explained: A Tutorial. In this paper, the state of the art in ultra-low power (ULP) VLSI design is presented within a unitary framework for the first time. A few general principles are first introduced to gain an insight into the design issues and the approaches that are specific to ULP systems, as well as to better understand the challenges that have to be faced in the foreseeable future. Intuitive understanding is acc...
A Bayesian Method for the Induction of Probabilistic Networks from Data This paper presents a Bayesian method for constructing probabilistic networks from databases. In particular, we focus on constructing Bayesian belief networks. Potential applications include computer-assisted hypothesis testing, automated scientific discovery, and automated construction of probabilistic expert systems. We extend the basic method to handle missing data and hidden (latent) variables. We show how to perform probabilistic inference by averaging over the inferences of multiple belief networks. Results are presented of a preliminary evaluation of an algorithm for constructing a belief network from a database of cases. Finally, we relate the methods in this paper to previous work, and we discuss open problems.
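The belief-network construction evaluated in the paper scores candidate parent sets with a closed-form marginal likelihood; the log-form sketch below implements a Cooper-Herskovits style count-based score on a toy binary dataset (the structure search itself is omitted):

```python
# Sketch of a Cooper-Herskovits style log score for one node and parent set:
# sum over parent configurations j of
#   log (r-1)! - log (N_j + r - 1)! + sum_k log N_jk!
from itertools import product
from math import lgamma

def ch_score(data, child, parents, arity):
    """Log CH score of `child` given `parents` (column indices into data)."""
    r = arity[child]
    score = 0.0
    for j in product(*[range(arity[p]) for p in parents]):
        counts = [0] * r
        for row in data:
            if all(row[p] == v for p, v in zip(parents, j)):
                counts[row[child]] += 1
        n_j = sum(counts)
        score += lgamma(r) - lgamma(n_j + r) + sum(lgamma(c + 1) for c in counts)
    return score

# Toy binary dataset: column 0 strongly determines column 1,
# so adding 0 as a parent of 1 should raise the score.
data = [(0, 0), (0, 0), (0, 1), (1, 1), (1, 1), (1, 1), (0, 0), (1, 1)]
arity = {0: 2, 1: 2}
print(f"score(1 | no parents) = {ch_score(data, 1, [], arity):.3f}")
print(f"score(1 | parent 0)   = {ch_score(data, 1, [0], arity):.3f}")
```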
REDMAN: An optimistic replication middleware for read-only resources in dense MANETs The spread of wireless portable devices is pushing towards service provisioning over dense Mobile Ad hoc NETworks (MANETs), i.e., limited spatial regions, such as shopping malls and airports, where a high number of mobile peers can autonomously cooperate without a statically deployed network infrastructure. The paper proposes the REDMAN middleware to manage, retrieve, and disseminate replicas of data/service components to cooperating nodes in a dense MANET. The guideline is to exploit high node population to enable optimistic lightweight resource replication capable of tolerating node exits/failures. REDMAN adopts original approximated solutions, specifically designed for dense MANET, that have demonstrated good scalability and limited overhead for dense MANET configuration (node identification and manager election), for replica distribution/retrieval, and for lazily consistent replica degree maintenance.
A Resolution-Reconfigurable 5-to-10-Bit 0.4-to-1 V Power Scalable SAR ADC for Sensor Applications A power-scalable SAR ADC for sensor applications is presented. The ADC features a reconfigurable 5-to-10-bit DAC whose power scales exponentially with resolution. At low resolutions where noise and linearity requirements are reduced, supply voltage scaling is leveraged to further reduce the energy-per-conversion. The ADC operates up to 2 MS/s at 1 V and 5 kS/s at 0.4 V, and its power scales linearly with sample rate down to leakage levels of 53 nW at 1 V and 4 nW at 0.4 V. Leakage power-gating during a SLEEP mode in between conversions reduces total power by up to 14% at sample rates below 1 kS/s. Prototyped in a low-power 65 nm CMOS process, the ADC in 10-bit mode achieves an INL and DNL of 0.57 LSB and 0.58 LSB respectively at 0.6 V, and the Nyquist SNDR and SFDR are 55 dB and 69 dB respectively at 0.55 V and 20 kS/s. The ADC achieves an optimal FOM of 22.4 fJ/conversion-step at 0.55 V in 10-bit mode. The combined techniques of DAC resolution and voltage scaling maximize efficiency at low resolutions, resulting in an FOM that increases by only 7x over the 5-bit scaling range, improving upon a 32x degradation that would otherwise arise from truncation of bits from an ADC of fixed resolution and voltage.
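The quoted figure of merit follows the standard Walden definition, FOM = P / (2^ENOB · fs) with ENOB = (SNDR − 1.76)/6.02; plugging in the abstract's 55 dB SNDR and 20 kS/s recovers the implied conversion power:

```python
# Sanity check of the Walden FOM quoted in the abstract above.
sndr_db = 55.0      # SNDR in 10-bit mode (from the abstract)
fs = 20e3           # sample rate [Hz] (from the abstract)
fom = 22.4e-15      # quoted FOM [J/conversion-step]

enob = (sndr_db - 1.76) / 6.02        # effective number of bits
power = fom * (2 ** enob) * fs        # implied power from FOM = P/(2^ENOB * fs)
print(f"ENOB = {enob:.2f} bits, implied power = {power*1e9:.0f} nW")
```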
Blind adaptive estimation of modulation scheme for software defined radio A software defined radio (SDR) system can reconfigure all or many of its specifications, such as modulation scheme, demodulation, and coding. Therefore, supplementary information that allows the receiver to recognize these specifications is required. Since highly redundant supplementary information reduces transmission efficiency, we aim to carry out blind adaptive demodulation, which can adaptively demodulate without supplementary information. This paper proposes and investigates both a blind adaptive algorithm for estimating the channel and one for estimating the modulation scheme. Computer simulations evaluate the proposed blind estimation algorithms.
PuDianNao: A Polyvalent Machine Learning Accelerator Machine Learning (ML) techniques are pervasive tools in various emerging commercial applications, but have to be accommodated by powerful computer systems to process very large data. Although general-purpose CPUs and GPUs have provided straightforward solutions, their energy-efficiencies are limited due to their excessive supports for flexibility. Hardware accelerators may achieve better energy-efficiencies, but each accelerator often accommodates only a single ML technique (family). According to the famous No-Free-Lunch theorem in the ML domain, however, an ML technique that performs well on one dataset may perform poorly on another, which implies that such an accelerator may sometimes lead to poor learning accuracy. Even regardless of the learning accuracy, such an accelerator can still become inapplicable simply because the concrete ML task is altered, or the user chooses another ML technique. In this study, we present an ML accelerator called PuDianNao, which accommodates seven representative ML techniques, including k-means, k-nearest neighbors, naive Bayes, support vector machine, linear regression, classification tree, and deep neural network. Benefiting from our thorough analysis of the computational primitives and locality properties of different ML techniques, PuDianNao can perform up to 1056 GOP/s (e.g., additions and multiplications) in an area of 3.51 mm^2, and consumes only 596 mW. Compared with the NVIDIA K20M GPU (28nm process), PuDianNao (65nm process) is 1.20x faster, and can reduce the energy by 128.41x.
A 12.6 mW, 573-2901 kS/s Reconfigurable Processor for Reconstruction of Compressively Sensed Physiological Signals. This article presents a reconfigurable processor based on the alternating direction method of multipliers (ADMM) algorithm for reconstructing compressively sensed physiological signals. The architecture is flexible to support physiological ExG [electrocardiography (ECG), electromyography (EMG), and electroencephalography (EEG)] signal with various signal dimensions (128, 256, 384, and 512). Data c...
1.020013
0.019755
0.019601
0.019448
0.018182
0.012245
0.006442
0.000444
0
0
0
0
0
0
A cross coupled low phase noise oscillator using an output swing enhancement technique A new voltage-controlled oscillator (VCO) in a CMOS process is presented in this paper. The paper's aim is to provide an innovative approach to improving phase noise, one of the most critical issues in VCOs. Contrary to most approaches put forward to decrease phase noise, which rely on higher current dissipation to increase the output voltage swing, this new method offers better specifications than traditional solutions. The presented circuit achieves extra oscillation amplitude without increasing the current level, taking advantage of tail-current elimination and topology optimization. Analysis of the presented peak voltage amplitude verifies the optimum performance of the proposed circuit. Post-layout simulation results at 2.3GHz with offset frequencies of 1MHz and 3MHz show a phase noise of about -125dBc/Hz and -136.5dBc/Hz, respectively, with a current of 1.3mA from a 1.8V supply. Monte Carlo simulations also show that the circuit's robustness to process and frequency variations is very promising.
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use dominance frontiers, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ⋄W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ⋄W. Thus, ⋄W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area2 product (EDA2P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA2P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
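To make the update pattern behind this method concrete, here is a minimal ADMM sketch for the lasso, one of the problems the review covers. The problem data, the penalty parameter rho, and the function names are illustrative, not taken from the review.

```python
# A minimal ADMM sketch for the lasso: minimize (1/2)||Ax - b||^2 + lam*||x||_1.
import numpy as np

def soft_threshold(v, k):
    """Proximal operator of k*||.||_1 (elementwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

def admm_lasso(A, b, lam=1.0, rho=1.0, iters=200):
    m, n = A.shape
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    AtA_rhoI = A.T @ A + rho * np.eye(n)   # reused by every x-update
    Atb = A.T @ b
    for _ in range(iters):
        x = np.linalg.solve(AtA_rhoI, Atb + rho * (z - u))  # x-minimization
        z = soft_threshold(x + u, lam / rho)                # z-minimization
        u = u + x - z                                       # dual update
    return z

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20); x_true[:3] = [1.5, -2.0, 0.7]
b = A @ x_true + 0.01 * rng.standard_normal(50)
print(admm_lasso(A, b)[:5])   # recovers an approximately sparse solution
```

Each iteration alternates a ridge-like x-update, an elementwise soft-threshold z-update, and a dual update; this splitting is exactly what makes the method easy to distribute.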
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
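As a rough, non-distributed illustration of the structure just described (a toy sketch, not the paper's algorithms): each node draws a random membership word, the level-i list contains the nodes agreeing on its first i bits, and search descends level by level as in a skip list.

```python
# Toy skip-graph sketch: membership words define the level lists; search
# moves toward the target within each list, then drops a level.
import random

def build_levels(keys, max_level=4, seed=1):
    random.seed(seed)
    member = {k: [random.randint(0, 1) for _ in range(max_level)] for k in keys}
    levels = {}  # (level, prefix) -> sorted keys sharing that prefix
    for k in sorted(keys):
        for i in range(max_level + 1):
            levels.setdefault((i, tuple(member[k][:i])), []).append(k)
    return member, levels

def search(levels, member, start, target, max_level=4):
    cur = start
    for level in range(max_level, -1, -1):
        lst = levels[(level, tuple(member[cur][:level]))]
        # Advance as far as possible toward target inside this list.
        cands = [k for k in lst if k <= target] if target >= cur else [k for k in lst if k >= target]
        if cands:
            cur = max(cands) if target >= cur else min(cands)
    return cur

member, levels = build_levels(range(0, 100, 7))
print(search(levels, member, start=0, target=50))  # closest stored key <= 50
```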
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)-size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε²}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D+√n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(log N), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator, while the Type III compensator response is synthesized by summing a high-gain low-frequency path (via error amplifier) with a moderate-gain high-frequency path (via bandpass filter) at the inputs of the PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate the two zeros required in a Type III compensator. A constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process, with the constant Gm/C biasing technique applied to one of the designs. Measurement results show that the converter output settles within 7 μs for a load current step of 500 mA. Peak efficiency of 97% is obtained at 360 mW output power, and high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of the proposed compensator are reduced by more than 75% in both designs, compared to an equivalent conventional Type III compensator.
Conductance modulation techniques in switched-capacitor DC-DC converter for maximum-efficiency tracking and ripple mitigation in 22nm Tri-gate CMOS Active conduction modulation techniques are demonstrated in a fully integrated multi-ratio switched-capacitor voltage regulator with hysteretic control, implemented in 22nm tri-gate CMOS with high-density MIM capacitor. We present (i) an adaptive switching frequency and switch-size scaling scheme for maximum-efficiency tracking across a wide range of voltages and currents, governed by a frequency-based control law that is experimentally validated across multiple dies and temperatures, and (ii) a simple active ripple mitigation technique to modulate the gate drive of select MOSFET switches effectively in all conversion modes. Efficiency boosts of up to 15% are measured under light-load conditions. Load-independent output ripple of <50 mV is achieved, enabling fewer interleaving phases. Testchip implementations and measurements demonstrate ease of integration in SoC designs, power efficiency benefits, and EMI/RFI improvements.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
Scores: 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Advances in the design of wideband receivers To be practical, wideband receivers must tolerate large out-of-band blockers, which can desensitize the receiver through gain compression or reciprocal mixing with LO phase noise. This paper reviews how a new noise-cancelling receiver architecture, one that utilises three important circuit innovations, mitigates gain compression without compromising noise figure. While the architecture is still susceptible to reciprocal mixing, it is shown how a recently proposed reciprocal-mixing cancelling technique (if incorporated into the receiver) can eliminate the need for a dramatic rise in LOGEN current.
Jitter-Power Trade-Offs in PLLs As new applications impose jitter values in the range of a few tens of femtoseconds, the design of phase-locked loops faces daunting challenges. This paper derives basic relations between the tolerable jitter and the power consumption, predicting severe issues as jitters below 10 fs are sought. The results are also applied to the sampling clocks in analog-to-digital converters and suggest that clock generation may consume a greater power than the converter itself.
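The abstract does not spell out the relations it derives, but the trade-off it describes is conventionally summarized by the standard jitter-power figure of merit below; assuming this metric, holding FoM constant while reducing jitter tenfold requires a hundredfold increase in power, which is why tens-of-femtoseconds targets are so costly.

```latex
% Standard PLL jitter-power figure of merit (assumed metric; the abstract
% does not state the exact expression used in the paper):
\mathrm{FoM} \;=\; 10\log_{10}\!\left[\left(\frac{\sigma_t}{1\,\mathrm{s}}\right)^{2}\cdot\frac{P}{1\,\mathrm{mW}}\right]
% At fixed FoM: reducing \sigma_t by 10x forces P to grow by 100x.
```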
Implicit Common-Mode Resonance in LC Oscillators. The performance of a differential LC oscillator can be enhanced by resonating the common mode of the circuit at twice the oscillation frequency. When this technique is correctly employed, Q-degradation due to the triode operation of the differential pair is eliminated and flicker noise is nulled. Until recently, one or more tail inductors have been used to achieve this common-mode resonance. In th...
An Inverse-Class-F CMOS Oscillator With Intrinsic-High-Q First Harmonic and Second Harmonic Resonances. This paper details the theory and implementation of an inverse-class-F (class-F⁻¹) CMOS oscillator. It features: 1) a single-ended PMOS-NMOS-complementary architecture to generate the differential outputs and 2) a transformer-based two-port resonator to boost the drain-to-gate voltage gain (AV) while creating two intrinsic-high-Q impedance peaks at the fundamental (fLO) and double (2fLO) oscilla...
Bird's-Eye View of Analog and Mixed-Signal Chips for the 21st Century The Internet of Everything (IoE), clearly a 21st century's technology, brilliantly plays with digital data obtained from analog sources, bringing together two different realities, the analog (physical/real) and the digital (cyber/virtual) worlds. Then, with the boundaries of IoE still analog in nature, the required functions at the interface involve sensing, measuring, filtering, converting, processing, and connecting, which imply that the analog layer governs the entire system in terms of accuracy and precision. Furthermore, such an interface integrates several analog and mixed-signal subsystems that comprise mainly signal transmission and reception, frequency generation, energy harvesting, and data and power conversion. This paper sets forth a state-of-the-art design perspective of some of the most critical building blocks used in the analog/digital interface, covering wireless cellular transceivers, millimeter-wave frequency generators, energy harvesting interfaces, plus data and power converters, that exhibit high-quality performance achieved through low power consumption, high energy efficiency, and high speed.
A 1.4mW 4.90-to-5.65GHz Class-C CMOS VCO with an Average FoM of 194.5dBc/Hz.
Channel Selection at RF Using Miller Bandpass Filters Channel selection at the input of RF receivers can considerably relax linearity requirements, leading to low-power, compact implementations. A GSM/WCDMA/802.11b/g receiver incorporates a Miller bandpass filter and its variants to achieve a channel bandwidth from 350 kHz to 20 MHz and a noise figure of 2.9 dB while consuming 20 mW. Fabricated in 65 nm CMOS technology, the receiver withstands a 0 dBm blocker at 20 MHz offset and exhibits a noise figure of 5.1 dB.
A 2.0 Gb/s Clock-Embedded Interface for Full-HD 10-Bit 120 Hz LCD Drivers With 1/5-Rate Noise-Tolerant Phase and Frequency Recovery A 2.0 Gb/s clock-embedded interface for LCD drivers, Advanced-PPmL, has been developed for high-speed data transfer and reduced area in transmission media. Only one pair of differential signals is needed to control the LCD driver and to display images. A newly developed 1/5-rate phase frequency detector helps achieve a 25% power reduction compared with a half-rate architecture. Pulse filtering of phase control signals and a 4B5B-based interface protocol have been developed for noise-tolerant clock recovery. Power consumption in the clock and data recovery (CDR) is 93 mW with a 3.0 V supply. The rms jitter in the recovered clock is 11 ps when a PRBS7 pattern is used.
A Fast and High Quality Multilevel Scheme for Partitioning Irregular Graphs Recently, a number of researchers have investigated a class of graph partitioning algorithms that reduce the size of the graph by collapsing vertices and edges, partition the smaller graph, and then uncoarsen it to construct a partition for the original graph (Bui and Jones, Proc. of the 6th SIAM Conference on Parallel Processing for Scientific Computing, 1993, 445-452; Hendrickson and Leland, A Multilevel Algorithm for Partitioning Graphs, Tech. report SAND 93-1301, Sandia National Laboratories, Albuquerque, NM, 1993). From the early work it was clear that multilevel techniques held great promise; however, it was not known whether they could be made to consistently produce high-quality partitions for graphs arising in a wide range of application domains. We investigate the effectiveness of many different choices for all three phases: coarsening, partition of the coarsest graph, and refinement. In particular, we present a new coarsening heuristic (called the heavy-edge heuristic) for which the size of the partition of the coarse graph is within a small factor of the size of the final partition obtained after multilevel refinement. We also present a much faster variation of the Kernighan-Lin (KL) algorithm for refining during uncoarsening. We test our scheme on a large number of graphs arising in various domains including finite element methods, linear programming, VLSI, and transportation. Our experiments show that our scheme produces partitions that are consistently better than those produced by spectral partitioning schemes in substantially smaller time. Also, when our scheme is used to compute fill-reducing orderings for sparse matrices, it produces orderings that have substantially smaller fill than the widely used multiple minimum degree algorithm.
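A minimal sketch of the heavy-edge heuristic named above, assuming an undirected weighted graph stored as a dict of dicts; the visiting order, tie-breaking, and coarsening details are illustrative simplifications, not the paper's exact procedure.

```python
# Heavy-edge matching: match each unmatched vertex with the unmatched
# neighbor joined by the heaviest edge, then collapse matched pairs.
import random

def heavy_edge_matching(adj, seed=0):
    random.seed(seed)
    matched = {}
    order = list(adj); random.shuffle(order)
    for v in order:
        if v in matched:
            continue
        free = [(w, u) for u, w in adj[v].items() if u not in matched]
        if free:
            _, u = max(free)            # heaviest incident free edge
            matched[v] = u; matched[u] = v
        else:
            matched[v] = v              # stays unmatched; maps to itself
    return matched

def coarsen(adj, matched):
    # Each matched pair becomes one coarse vertex; parallel edges are summed.
    rep = {v: min(v, matched[v]) for v in adj}
    coarse = {}
    for v, nbrs in adj.items():
        cv = rep[v]
        coarse.setdefault(cv, {})
        for u, w in nbrs.items():
            cu = rep[u]
            if cu != cv:
                coarse[cv][cu] = coarse[cv].get(cu, 0) + w
    return coarse

g = {0: {1: 5, 2: 1}, 1: {0: 5, 3: 2}, 2: {0: 1, 3: 4}, 3: {1: 2, 2: 4}}
m = heavy_edge_matching(g)
print(coarsen(g, m))   # the coarse graph after one matching pass
```

The key property motivating the heuristic is that collapsing heavy edges hides as much edge weight as possible inside coarse vertices, so the coarse graph's cut sizes stay close to the original's.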
Graph exploration by a finite automaton A finite automaton, simply referred to as a robot, has to explore a graph whose nodes are unlabeled and whose edge ports are locally labeled at each node. The robot has no a priori knowledge of the topology of the graph or of its size. Its task is to traverse all the edges of the graph. We first show that, for any K-state robot and any d ≥ 3, there exists a planar graph of maximum degree d with at most K + 1 nodes that the robot cannot explore. This bound improves all previous bounds in the literature. More interestingly, we show that, in order to explore all graphs of diameter D and maximum degree d, a robot needs Ω(D log d) memory bits, even if we restrict the exploration to planar graphs. This latter bound is tight. Indeed, a simple DFS up to depth D + 1 enables a robot to explore any graph of diameter D and maximum degree d using a memory of size O(D log d) bits. We thus prove that the worst case space complexity of graph exploration is Θ(D log d) bits.
A 6.5 GHz wideband CMOS low noise amplifier for multi-band use An LNA based on a noise-cancelled common-gate topology spans 0.1 to 6.5 GHz with a gain of 19 dB, an NF of 3 dB, and S11 < -10 dB. It is realized in 0.13-μm CMOS and dissipates 12 mW.
Computational intelligence for heart disease diagnosis: A medical knowledge driven approach This paper investigates a number of computational intelligence techniques in the detection of heart disease. In particular, a comparison of six well-known classifiers on the widely used Cleveland data is performed. Further, this paper highlights the potential of an expert-judgment-based (i.e., medical knowledge driven) feature selection process (termed MFS), and compares it against the generally employed computational intelligence based feature selection mechanism. This article also recognizes that the publicly available Cleveland data becomes imbalanced when considering binary classification. The performance of the classifiers, and also the potential of MFS, are investigated considering this imbalanced data issue. The experimental results demonstrate that the use of MFS noticeably improved the performance, especially in terms of accuracy, for most of the classifiers considered and for the majority of the datasets (generated by converting the Cleveland dataset for binary classification). MFS combined with the computerized feature selection process (CFS) has also been investigated and showed encouraging results, particularly for NaiveBayes, IBK and SMO. In summary, the medical knowledge based feature selection method has shown promise for use in heart disease diagnostics.
Architectural Support for Dynamic Linking All software in use today relies on libraries, including standard libraries (e.g., C, C++) and application-specific libraries (e.g., libxml, libpng). Most libraries are loaded in memory and dynamically linked when programs are launched, resolving symbol addresses across the applications and libraries. Dynamic linking has many benefits: It allows code to be reused between applications, conserves memory (because only one copy of a library is kept in memory for all the applications that share it), and allows libraries to be patched and updated without modifying programs, among numerous other benefits. However, these benefits come at the cost of performance. For every call made to a function in a dynamically linked library, a trampoline is used to read the function address from a lookup table and branch to the function, incurring memory load and branch operations. Static linking avoids this performance penalty, but loses all the benefits of dynamic linking. Given its myriad benefits, dynamic linking is the predominant choice today, despite the performance cost. In this work, we propose a speculative hardware mechanism to optimize dynamic linking by avoiding executing the trampolines for library function calls, providing the benefits of dynamic linking with the performance of static linking. Speculatively skipping the memory load and branch operations of the library call trampolines improves performance by reducing the number of executed instructions and gains additional performance by reducing pressure on the instruction and data caches, TLBs, and branch predictors. Because the indirect targets of library call trampolines do not change during program execution, our speculative mechanism never misspeculates in practice. We evaluate our technique on real hardware with production software and observe up to 4% speedup using only 1.5KB of on-chip storage.
An Event-Driven Quasi-Level-Crossing Delta Modulator Based on Residue Quantization This article introduces a digitally intensive event-driven quasi-level-crossing (quasi-LC) delta-modulator analog-to-digital converter (ADC) with adaptive resolution (AR) for Internet of Things (IoT) wireless networks, in which minimizing the average sampling rate for sparse input signals can significantly reduce the power consumed in data transmission, processing, and storage. The proposed AR quasi-LC delta modulator quantizes the residue voltage signal with a 4-bit asynchronous successive-approximation-register (SAR) sub-ADC, which enables a straightforward implementation of LC and AR algorithms in the digital domain. The proposed modulator achieves data compression by means of a globally signal-dependent average sampling rate and achieves AR through a digital multi-level comparison window that overcomes the tradeoff between the dynamic range and the input bandwidth in the conventional LC ADCs. Engaging the AR algorithm reduces the average sampling rate by a factor of 3 at the edge of the modulator's signal bandwidth. The proposed modulator is fabricated in 28-nm CMOS and achieves a peak SNDR of 53 dB over a signal bandwidth of 1.42 MHz while consuming 205 μW and occupying an active area of 0.0126 mm².
Scores: 1.1155, 0.07, 0.06, 0.06, 0.06, 0.019723, 0.006667, 0.0011, 0, 0, 0, 0, 0, 0
A 2.9–4.0-GHz Fractional-N Digital PLL With Bang-Bang Phase Detector and 560-fs Integrated Jitter at 4.5-mW Power This paper introduces a ΔΣ fractional-N digital PLL based on a single-bit TDC. A digital-to-time converter, placed in the feedback path, cancels out the quantization noise introduced by the dithering of the frequency divider modulus and permits achieving low noise at low power. The PLL is implemented in a standard 65-nm CMOS process. It achieves −102-dBc/Hz phase noise at 50-kHz offset and a total absolute jitter below 560 fs rms (integrated from 3 kHz to 30 MHz), even in the worst case of a −42-dBc in-band fractional spur. The synthesizer tuning range spans from 2.92 GHz to 4.05 GHz with 70-Hz resolution. The total power consumption is 4.5 mW, which leads to the best jitter-power trade-off obtained with a fractional-N synthesizer. The synthesizer demonstrates the capability of frequency modulation up to a 1.25-Mb/s data rate.
Digital Background Correction of Harmonic Distortion in Pipelined ADCs. Pipelined analog-to-digital converters (ADCs) are sensitive to distortion introduced by the residue amplifiers in their first few stages. Unfortunately, residue amplifier distortion tends to be inversely related to power consumption in practice, so the residue amplifiers usually are the dominant consumers of power in high-resolution pipelined ADCs. This paper presents a background calibration tech...
A Fractional-N Sub-Sampling PLL using a Pipelined Phase-Interpolator With an FoM of -250 dB. A fractional-N sub-sampling PLL architecture based on pipelined phase-interpolator and Digital-to-Time-Converter (DTC) is presented in this paper. The combination of pipelined phase-interpolator and DTC enables efficient design of the multi-phase generation mechanism required for the fractional operation. This technique can be used for designing a fractional-N PLL with low in-band phase noise and ...
A 2.4-GHz 1.5-mW Digital Multiplying Delay-Locked Loop Using Pulsewidth Comparator and Double Injection Technique. In this paper, we propose a low-jitter low-power digital multiplying delay-locked loop (MDLL) with a self-calibrated double reference injection scheme. To reduce jitter, the noisy edge of the oscillator is replaced by both the rising and falling edges of the clean reference, which results in 6-dB reduction in phase noise compared with a conventional single-edge injection MDLL. Reference spur cause...
A Comprehensive Phase Noise Analysis of Bang-Bang Digital PLLs This work introduces an accurate linearized model and phase noise spectral analysis of digital bang-bang PLLs, that includes both the reference and the digitally-controlled oscillator (DCO) noise contributions. A time-domain analysis of bang-bang PLLs is leveraged to derive closed-form expressions for the integrated jitter, leading to a precise estimation of the binary phase detector (BPD) equival...
A Fractional-N Divider-Less Phase-Locked Loop With a Subsampling Phase Detector A low-noise divider-less PLL, employing a subsampling locked loop, samples the VCO output by a digital pulse-width modulator (DPWM) to perform fractional-N operation. The frequency synthesizer achieves a low in-band phase noise of -112 dBc/Hz at a 2.3 GHz output frequency. The analysis for the frequency synthesizer, especially for the nonlinear characteristics of the circuits, is proposed. Fabricated in a 0.18 μm CMOS technology, the frequency synthesizer consumes 9.6 mA and achieves figure-of-merit of -239.1 dB, corresponding to 266 fs rms jitter.
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use dominance frontiers, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
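To make the dominance-frontier concept concrete, here is a compact sketch of the computation usually attributed to this paper, assuming immediate dominators are already available; the CFG encoding and the diamond example are illustrative.

```python
# Dominance frontiers (Cytron et al. style): a node n is in DF(x) if x
# dominates a predecessor of n but does not strictly dominate n itself.
def dominance_frontiers(preds, idom):
    """preds: node -> list of CFG predecessors; idom: node -> immediate dominator."""
    df = {n: set() for n in preds}
    for n, ps in preds.items():
        if len(ps) >= 2:                 # only join nodes create frontier entries
            for p in ps:
                runner = p
                # Walk up the dominator tree until reaching idom(n).
                while runner != idom[n]:
                    df[runner].add(n)
                    runner = idom[runner]
    return df

# Diamond CFG: entry -> a, b; a -> join; b -> join.
preds = {"entry": [], "a": ["entry"], "b": ["entry"], "join": ["a", "b"]}
idom  = {"entry": "entry", "a": "entry", "b": "entry", "join": "entry"}
print(dominance_frontiers(preds, idom))  # DF(a) = DF(b) = {'join'}
```

Dominance frontiers are exactly the places where SSA phi-functions must be inserted, which is why this computation sits at the heart of SSA construction.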
GPUWattch: enabling energy optimizations in GPGPUs General-purpose GPUs (GPGPUs) are becoming prevalent in mainstream computing, and performance per watt has emerged as a more crucial evaluation metric than peak performance. As such, GPU architects require robust tools that will enable them to quickly explore new ways to optimize GPGPUs for energy efficiency. We propose a new GPGPU power model that is configurable, capable of cycle-level calculations, and carefully validated against real hardware measurements. To achieve configurability, we use a bottom-up methodology and abstract parameters from the microarchitectural components as the model's inputs. We developed a rigorous suite of 80 microbenchmarks that we use to bound any modeling uncertainties and inaccuracies. The power model is comprehensively validated against measurements of two commercially available GPUs, and the measured error is within 9.9% and 13.4% for the two target GPUs (GTX 480 and Quadro FX5600). The model also accurately tracks the power consumption trend over time. We integrated the power model with the cycle-level simulator GPGPU-Sim and demonstrate the energy savings by utilizing dynamic voltage and frequency scaling (DVFS) and clock gating. Traditional DVFS reduces GPU energy consumption by 14.4% by leveraging within-kernel runtime variations. Finer-grained SM cluster-level DVFS improves the energy savings from 6.6% to 13.6% for those benchmarks that show clustered execution behavior. We also show that clock gating inactive lanes during divergence reduces dynamic power by 11.2%.
Self-stabilizing systems in spite of distributed control The synchronization task between loosely coupled cyclic sequential processes (as can be distinguished in, for instance, operating systems) can be viewed as keeping the relation “the system is in a legitimate state” invariant. As a result, each individual process step that could possibly cause violation of that relation has to be preceded by a test deciding whether the process in question is allowed to proceed or has to be delayed. The resulting design is readily—and quite systematically—implemented if the different processes can be granted mutually exclusive access to a common store in which “the current system state” is recorded.
Barrier certificates for nonlinear model validation Methods for model validation of continuous-time nonlinear systems with uncertain parameters are presented in this paper. The methods employ functions of state-parameter-time, termed barrier certificates, whose existence proves that a model and a feasible parameter set are inconsistent with some time-domain experimental data. A very large class of models can be treated within this framework; this includes differential-algebraic models, models with memoryless/dynamic uncertainties, and hybrid models. Construction of barrier certificates can be performed by convex optimization, utilizing recent results on the sum of squares decomposition of multivariate polynomials.
Cache Games -- Bringing Access-Based Cache Attacks on AES to Practice Side channel attacks on cryptographic systems exploit information gained from physical implementations rather than theoretical weaknesses of a scheme. In recent years, major achievements were made for the class of so-called access-driven cache attacks. Such attacks exploit the leakage of the memory locations accessed by a victim process. In this paper we consider the AES block cipher and present an attack which is capable of recovering the full secret key in almost real time for AES-128, requiring only a very limited number of observed encryptions. Unlike previous attacks, we do not require any information about the plaintext (such as its distribution, etc.). Moreover, for the first time, we also show how the plaintext can be recovered without having access to the ciphertext at all. It is the first working attack on AES implementations using compressed tables. There, no efficient techniques to identify the beginning of AES rounds are known, which is the fundamental assumption underlying previous attacks. We have a fully working implementation of our attack which is able to recover AES keys after observing as little as 100 encryptions. It works against the OpenSSL 0.9.8n implementation of AES on Linux systems. Our spy process does not require any special privileges beyond those of a standard Linux user. A contribution of probably independent interest is a denial of service attack on the task scheduler of current Linux systems (CFS), which allows one to observe (on average) every single memory access of a victim process.
A dynamically reconfigurable system-on-a-chip architecture for future mobile digital signal processing The evolving of current and future broadband access techniques into the wireless domain introduces new and flexible network architectures with difficult and interesting challenges. The system designers are faced with a challenging set of problems that stem from access mechanisms, energy conservation, error rate, transmission speed characteristics of the wireless links and mobility aspects. This paper presents first the major challenges in realizing flexible microelectronic system solutions for digital baseband signal processing in future mobile communication applications. Based thereupon, the architecture design of flexible system-on-a-chip solutions is discussed. The focus of the paper is the introduction of a new parallel and dynamically reconfigurable hardware architecture tailored to this application area. Its performance issues and potential are discussed by the implementation of a flexible and computation-intensive component of future mobile terminals.
OpenIoT: An open service framework for the Internet of Things The Internet of Things (IoT) has been a hot topic for the future of computing and communication. It will not only have a broad impact on our everyday life in the near future, but also create a new ecosystem involving a wide array of players such as device developers, service providers, software developers, network operators, and service users. In this paper, we present an open service framework for the Internet of Things, facilitating entrance into the IoT-related mass market and establishing a global IoT ecosystem with the worldwide use of IoT devices and software. We expect that the open IoT service framework we propose will play an important role in the widespread adoption of the Internet of Things in our everyday life, not only enhancing our quality of life with a large number of innovative applications and services, but also offering endless opportunities to all of the stakeholders in the world of information and communication technologies.
An Energy-Efficient SAR ADC With Event-Triggered Error Correction This brief presents an energy-efficient fully differential 10-bit successive approximation register (SAR) analog-to-digital converter (ADC) with a sampling rate of 320 kS/s. The optimal capacitor split and bypass number is analyzed to achieve the highest switching energy efficiency. The common-mode voltage level remains constant during the MSB-capacitor switching cycles. To minimize nonlinearity due to charge averaging voltage offset or DAC array mismatch, an event-triggered error correction method is employed as a redundant cycle for detecting digital code errors within 1 least significant bit (LSB). A test chip was fabricated using the 180-nm CMOS process and occupied a 0.0564-mm² core area. Under a regular 0.65-V supply voltage, the ADC achieved an effective number of bits of 9.61 and a figure of merit (FOM) of 6.38 fJ/conversion-step, with 1.6-μW power dissipation for a low-frequency input. The measured differential and integral nonlinearity results are within 0.30 LSB and 0.43 LSB, respectively.
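As a quick consistency check, the reported numbers line up with the standard Walden figure of merit, assuming that is the definition used:

```latex
\mathrm{FOM} \;=\; \frac{P}{2^{\mathrm{ENOB}} \cdot f_s}
            \;=\; \frac{1.6\,\mu\mathrm{W}}{2^{9.61} \times 320\,\mathrm{kS/s}}
            \;\approx\; 6.4\,\text{fJ/conversion-step}
```

which agrees with the quoted 6.38 fJ/conversion-step to within rounding.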
Scores: 1.05, 0.05, 0.05, 0.05, 0.05, 0.02, 0, 0, 0, 0, 0, 0, 0, 0
Coordinated Energy Dispatch of Autonomous Microgrids with Distributed MPC Optimization With the increased penetration of renewable energy sources (RESs) and plug-and-play loads, Microgrids (MGs) bring direct challenges in energy management due to the uncertainties in both supply and demand sides. In this paper, we present a coordinated energy dispatch based on Distributed Model Predictive Control (DMPC), where the upper level provides an optimal scheduling for energy exchange between the Distribution Network Operator (DNO) and MGs, whereas the lower level guarantees a satisfactory tracking between supply and demand. With the proposed scheme, we not only maintain a supply–demand balance in an economic way, but also improve the renewable energy utilization of distributed MG systems. To describe the dynamic process of energy trading, a novel conditional probability distribution model is introduced, which can characterize the randomness of charging/discharging and the uncertainties of energy dispatch. Moreover, we formulate a two-layer optimization problem and give the corresponding algorithm. Finally, simulation results show the effectiveness of the proposed method.
Bidding Strategy for Microgrid in Day-Ahead Market Based on Hybrid Stochastic/Robust Optimization This paper proposes an optimal bidding strategy in the day-ahead market of a microgrid consisting of intermittent distributed generation (DG), storage, dispatchable DG, and price responsive loads. The microgrid coordinates the energy consumption or production of its components, and trades electricity in both day-ahead and real-time markets to minimize its operating cost as a single entity. The bidding problem is challenging due to a variety of uncertainties, including power output of intermittent DG, load variation, and day-ahead and real-time market prices. A hybrid stochastic/robust optimization model is proposed to minimize the expected net cost, i.e., expected total cost of operation minus total benefit of demand. This formulation can be solved by mixed-integer linear programming. The uncertain output of intermittent DG and day-ahead market price are modeled via scenarios based on forecast results, while a robust optimization is proposed to limit the unbalanced power in real-time market taking account of the uncertainty of real-time market price. Numerical simulations on a microgrid consisting of a wind turbine, a photovoltaic panel, a fuel cell, a micro-turbine, a diesel generator, a battery, and a responsive load show the advantage of stochastic optimization, as well as robust optimization.
CVaR-Constrained Optimal Bidding of Electric Vehicle Aggregators in Day-Ahead and Real-Time Markets. An electric vehicle aggregator (EVA) that manages geographically dispersed electric vehicles offers an opportunity for the demand side to participate in electricity markets. This paper proposes an optimization model to determine the day-ahead inflexible bidding and real-time flexible bidding under market uncertainties. Based on the relationship between market price and bid price, the proposed opti...
A Multi-Agent Reinforcement Learning-Based Data-Driven Method for Home Energy Management. This paper proposes a novel framework for home energy management (HEM) based on reinforcement learning in achieving efficient home-based demand response (DR). The concerned hour-ahead energy consumption scheduling problem is duly formulated as a finite Markov decision process (FMDP) with discrete time steps. To tackle this problem, a data-driven method based on neural network (NN) and Q-learning a...
Multi-Agent Based Transactive Energy Management Systems for Residential Buildings with Distributed Energy Resources Proper management of building loads and distributed energy resources (DER) can offer grid assistance services in transactive energy (TE) frameworks besides providing cost savings for the consumer. However, most TE models require building loads and DER units to be managed by external entities (e.g., aggregators), and in some cases, consumers need to provide critical information related to their ele...
Chord: a scalable peer-to-peer lookup protocol for internet applications A fundamental problem that confronts peer-to-peer applications is the efficient location of the node that stores a desired data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis and simulations show that Chord is scalable: Communication cost and the state maintained by each node scale logarithmically with the number of Chord nodes.
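A toy sketch of the key-to-node mapping and finger-table lookup that the protocol builds on; the ring size, node IDs, and the greedy routing rule are illustrative simplifications of the paper's scheme, not its exact algorithm.

```python
# Chord-style lookup on a 2^M identifier ring with a static node set.
M = 6                       # identifier bits -> ring of size 64
RING = 1 << M
NODES = sorted([1, 8, 14, 21, 32, 38, 42, 48, 51, 56])

def successor(k):
    """First node clockwise from identifier k (the node that stores key k)."""
    for n in NODES:
        if n >= k % RING:
            return n
    return NODES[0]         # wrap around the ring

def finger_table(n):
    """Finger i of node n points at successor(n + 2^i)."""
    return [successor((n + (1 << i)) % RING) for i in range(M)]

def lookup(start, key):
    """Greedy lookup: repeatedly jump to the farthest finger short of the key."""
    n, hops = start, 0
    while successor(key) != n:
        nxt = n
        for f in finger_table(n):
            # Keep the finger that gets closest to the key without overshooting.
            if f != n and (f - n) % RING <= (key - n) % RING:
                nxt = f
        if nxt == n:         # no finger helps; step to the immediate successor
            nxt = successor(n + 1)
        n, hops = nxt, hops + 1
    return n, hops

print(lookup(1, 54))  # -> (56, 4): the storing node and the hop count
```

Because each finger roughly halves the remaining ring distance, lookups complete in O(log N) hops, which is the property the abstract's scalability claim rests on.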
Computing size-independent matrix problems on systolic array processors A methodology to transform dense matrices to band matrices is presented in this paper. This transformation is accomplished by triangular-block partitioning, and allows the implementation of solutions to problems of any given size by means of contraflow systolic arrays, originally proposed by H.T. Kung. Matrix-vector and matrix-matrix multiplications are the operations considered here. The proposed transformations allow the optimal utilization of the processing elements (PEs) of the systolic array when dense matrices are operated on. Every computation is made inside the array by using adequate feedback. The feedback delay time depends only on the systolic array size.
A 12 bit 2.9 GS/s DAC With IM3 ≪ −60 dBc Beyond 1 GHz in 65 nm CMOS A 12 bit 2.9 GS/s current-steering DAC implemented in 65 nm CMOS is presented, with an IM3 < −60 dBc beyond 1 GHz while driving a 50 Ω load with an output swing of 2.5 Vppd and dissipating a power of 188 mW. The SFDR measured at 2.9 GS/s is better than 60 dB beyond 340 MHz while the SFDR measured at 1.6 GS/s is better than 60 dB beyond 440 MHz. The increase in performance at high frequencies, co...
The M-Machine multicomputer The M-Machine is an experimental multicomputer being developed to test architectural concepts motivated by the constraints of modern semiconductor technology and the demands of programming systems. The M-Machine computing nodes are connected with a 3-D mesh network; each node is a multithreaded processor incorporating 9 function units, on-chip cache, and local memory. The multiple function units are used to exploit both instruction-level and thread-level parallelism. A user accessible message passing system yields fast communication and synchronization between nodes. Rapid access to remote memory is provided transparently to the user with a combination of hardware and software mechanisms. This paper presents the architecture of the M-Machine and describes how its mechanisms attempt to maximize both single thread performance and overall system throughput. The architecture is complete and the MAP chip, which will serve as the M-Machine processing node, is currently being implemented.
SPONGENT: a lightweight hash function This paper proposes spongent - a family of lightweight hash functions with hash sizes of 88 (for preimage resistance only), 128, 160, 224, and 256 bits based on a sponge construction instantiated with a present-type permutation, following the hermetic sponge strategy. Its smallest implementations in ASIC require 738, 1060, 1329, 1728, and 1950 GE, respectively. To our best knowledge, at all security levels attained, it is the hash function with the smallest footprint in hardware published so far, the parameter being highly technology dependent. spongent offers a lot of flexibility in terms of serialization degree and speed. We explore some of its numerous implementation trade-offs. We furthermore present a security analysis of spongent. Basing the design on a present-type primitive provides confidence in its security with respect to the most important attacks. Several dedicated attack approaches are also investigated.
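For orientation, the absorb/squeeze pattern that any sponge construction instantiates can be sketched as below. The permutation here is a deliberate stand-in with no cryptographic value; it is not the present-type permutation used by spongent, and the rate/capacity split and padding are illustrative.

```python
# Toy sponge construction: XOR message blocks into the rate portion of the
# state, permute, then squeeze output blocks from the same rate portion.
def toy_permutation(state: bytes) -> bytes:
    # Placeholder mixing over a 40-bit state (5 bytes); NOT cryptographic.
    x = int.from_bytes(state, "big")
    for _ in range(8):
        x = ((x * 0x9E3779B1) ^ (x >> 13)) & ((1 << 40) - 1)
    return x.to_bytes(5, "big")

def sponge_hash(msg: bytes, rate=1, capacity=4, out_len=4) -> bytes:
    state = bytes(rate + capacity)        # b = r + c bytes, initially zero
    msg += b"\x80"                        # minimal padding (illustrative)
    while len(msg) % rate:
        msg += b"\x00"
    # Absorbing phase: one permutation call per r-byte block.
    for i in range(0, len(msg), rate):
        block = msg[i:i + rate] + bytes(capacity)
        state = toy_permutation(bytes(a ^ b for a, b in zip(state, block)))
    # Squeezing phase: emit r bytes per permutation call.
    out = b""
    while len(out) < out_len:
        out += state[:rate]
        state = toy_permutation(state)
    return out[:out_len]

print(sponge_hash(b"hello").hex())
```

The capacity bytes are never directly exposed, which is the structural source of a sponge's security margin and the reason spongent's footprint scales with its security level.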
MicroGP—An Evolutionary Assembly Program Generator This paper describes 驴GP, an evolutionary approach for generating assembly programs tuned for a specific microprocessor. The approach is based on three clearly separated blocks: an evolutionary core, an instruction library and an external evaluator. The evolutionary core conducts adaptive population-based search. The instruction library is used to map individuals to valid assembly language programs. The external evaluator simulates the assembly program, providing the necessary feedback to the evolutionary core. 驴GP has some distinctive features that allow its use in specific contexts. This paper focuses on one such context: test program generation for design validation of microprocessors. Reported results show 驴GP being used to validate a complex 5-stage pipelined microprocessor. Its induced test programs outperform an exhaustive functional test and an instruction randomizer, showing that engineers are able to automatically obtain high-quality test programs.
The accelerator store: A shared memory framework for accelerator-based systems This paper presents the many-accelerator architecture, a design approach combining the scalability of homogeneous multi-core architectures and system-on-chip's high performance and power-efficient hardware accelerators. In preparation for systems containing tens or hundreds of accelerators, we characterize a diverse pool of accelerators and find each contains significant amounts of SRAM memory (up to 90&percnt; of their area). We take advantage of this discovery and introduce the accelerator store, a scalable architectural component to minimize accelerator area by sharing its memories between accelerators. We evaluate the accelerator store for two applications and find significant system area reductions (30&percnt;) in exchange for small overheads (2&percnt; performance, 0&percnt;--8&percnt; energy). The paper also identifies new research directions enabled by the accelerator store and the many-accelerator architecture.
Variable Off-Time Control Loop for Current-Mode Floating Buck Converters in LED Driving Applications A versatile controller architecture, used in current-mode floating buck converters for LED driving, is developed. State-of-the-art controllers rely on a fixed switching period and variable duty cycle, focusing on current averaging circuits. Instead, the proposed controller architecture is based on fixed peak current and adaptable off time as the average current control method. The control loop is comprised of an averaging block, transconductance amplifier, and an innovative time modulator. This modulator is intended to provide constant control loop response regardless of input voltage, current storage inductor, and number of LEDs in order to improve converter applicability for LED drivers. Fabricated in a 5 V standard 0.5 μm CMOS technology, the prototype controller is implemented and tested in a current-mode floating buck converter. The converter exhibits sound continuous conduction mode (CCM) operation for input voltages between 11 and 20 V, and a wide inductor range of 100-1000 μH. In all instances, the measured average LED current variation was lower than 10% of the desired value. A maximum conversion efficiency of 91% is obtained when driving 50 mA through four LEDs (with 14 V input voltage and an inductor of 470 μH). A stable CCM converter operation is also proven by simulation for nine LEDs and 45 V input voltage.
Power Efficiency Comparison of Event-Driven and Fixed-Rate Signal Conversion and Compression for Biomedical Applications Energy-constrained biomedical recording systems need power-efficient data converters and good signal compression in order to meet the stringent power consumption requirements of many applications. In literature today, typically a SAR ADC in combination with digital compression is used. Recently, alternative event-driven sampling techniques have been proposed that incorporate compression in the ADC, such as level-crossing A/D conversion. This paper describes the power efficiency analysis of such level-crossing ADC (LCADC) and the traditional fixed-rate SAR ADC with simple compression. A model for the power consumption of the LCADC is derived, which is then compared to the power consumption of the SAR ADC with zero-order hold (ZOH) compression for multiple biosignals (ECG, EMG, EEG, and EAP). The LCADC is more power efficient than the SAR ADC up to a cross-over point in quantizer resolution (for example 8 bits for an EEG signal). This cross-over point decreases with the ratio of the maximum to average slope in the signal of the application. It also changes with the technology and design techniques used. The LCADC is thus suited for low to medium resolution applications. In addition, the event-driven operation of an LCADC results in fewer data to be transmitted in a system application. The event-driven LCADC without timer and with single-bit quantizer achieves a reduction in power consumption at system level of two orders of magnitude, an order of magnitude better than the SAR ADC with ZOH compression. At system level, the LCADC thus offers a big advantage over the SAR ADC.
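A small numerical illustration of the comparison's central point, that level-crossing sampling produces far fewer samples than fixed-rate sampling on sparse signals; the signal, level spacing, and rates are invented for the example and do not come from the paper's power model.

```python
# Count simplified level-crossing "events" versus fixed-rate samples on a
# mostly-quiet trace containing one short burst of activity.
import numpy as np

fs = 1000                       # fixed-rate sampler frequency (Hz)
t = np.arange(0, 1, 1 / fs)
signal = np.where((t > 0.4) & (t < 0.5),
                  np.sin(2 * np.pi * 50 * t), 0.0)   # one 100 ms burst

delta = 0.1                     # LC quantization step (level spacing)
events = 0
last_level = 0.0
for v in signal:
    # Emit an event each time the input moves a full level away from the
    # last recorded level (simplified LC ADC behavior, no timer).
    while abs(v - last_level) >= delta:
        last_level += delta * np.sign(v - last_level)
        events += 1

print(f"fixed-rate samples: {len(signal)}, LC events: {events}")
```

On this trace the fixed-rate converter produces 1000 samples while the LC model emits events only during the burst, mirroring the data-reduction argument the abstract makes for sparse biosignals.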
Scores: 1.2, 0.2, 0.2, 0.2, 0.066667, 0, 0, 0, 0, 0, 0, 0, 0, 0
Pseudo Predictor Feedback Stabilization of Linear Systems with Both State and Input Delays This paper is concerned with the stabilization of linear systems with both input delay and state delay, utilizing the predictor-based delay compensation method. The future dynamics of the system are predicted by the proposed pseudo predictor feedback (PPF) control scheme. It is proved that the stability of the time-delay system under the PPF controller is equivalent to the stability of a corresponding integral delay system. The proposed method is also adopted for the stabilization of time-varying time-delay systems. A numerical example is carried out to illustrate the effectiveness of the proposed approach.
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use dominance frontiers, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ⋄W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ⋄W. Thus, ⋄W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area² product (EDA²P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA²P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
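Skip graphs inherit their search behavior from skip lists. The toy, single-process skip list below illustrates only the key-ordered expected-O(log n) search that a skip graph distributes across failure-prone nodes; the distributed membership, resilience, and repair machinery of the paper is not shown:

```python
import random

class Node:
    def __init__(self, key, level):
        self.key = key
        self.next = [None] * (level + 1)        # one forward pointer per level

class SkipList:
    """Single-process stand-in for the ordered search a skip graph
    distributes: each extra level skips ahead, giving expected O(log n)."""
    MAX_LEVEL = 16

    def __init__(self):
        self.head = Node(float("-inf"), self.MAX_LEVEL)

    def insert(self, key):
        level = 0
        while random.random() < 0.5 and level < self.MAX_LEVEL:
            level += 1                          # geometric level distribution
        node, cur = Node(key, level), self.head
        for lvl in range(self.MAX_LEVEL, -1, -1):
            while cur.next[lvl] and cur.next[lvl].key < key:
                cur = cur.next[lvl]             # advance along this level
            if lvl <= level:
                node.next[lvl], cur.next[lvl] = cur.next[lvl], node

    def search(self, key):
        cur = self.head
        for lvl in range(self.MAX_LEVEL, -1, -1):
            while cur.next[lvl] and cur.next[lvl].key <= key:
                cur = cur.next[lvl]
        return cur.key == key

sl = SkipList()
for k in [3, 1, 4, 5, 9, 2, 6]:
    sl.insert(k)
print(sl.search(5), sl.search(7))               # True False
```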
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)-size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε²}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D+√n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(log N), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator; while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via error amplifier) with a moderate-gain high-frequency path (via bandpass filter) at the inputs of PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate two zeros required in a Type III compensator. Constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process with constant Gm/C biasing technique being applied to one of the designs. Measurement result shows that converter output is settled within 7 μs for a load current step of 500 mA. Peak efficiency of 97% is obtained at 360 mW output power, and high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of proposed compensator is reduced by > 75 % in both designs, compared to an equivalent conventional Type III compensator.
Fundamental analysis of a car to car visible light communication system This paper presents a mathematical model for car-to-car (C2C) visible light communications (VLC) that aims to predict the system performance under different communication geometries. A market-weighted headlamp beam pattern model is employed. We consider both the line-of-sight (LOS) and non-line-of-sight (NLOS) links, and outline the relationship between the communication distance, the system bit error rate (BER) performance and the BER distribution on a vertical plane. Results show that by placing a photodetector (PD) at a height of 0.2-0.4 m above road surface in the car, the communications coverage range can be extended up to 20 m at a data rate of 2Mbps.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Continuous–discrete adaptive observers for state affine systems The observation of a class of multi-input multi-output (MIMO) state affine systems with constant unknown parameters and discrete-time output measurements is addressed. Assuming that certain persistent excitation conditions hold and that the sampling steps satisfy suitable boundedness hypotheses, system observability is ensured and a class of global exponential observers is synthesized.
Construction of interval observers for continuous-time systems with discrete measurements. We consider continuous-time systems with input, output and additive disturbances in the particular case where the measurements are only available at discrete instants and have disturbances. To solve a state estimation problem, we construct continuous–discrete interval observers that are asymptotically stable in the absence of disturbances. These interval observers are composed of two copies of the studied system and of a framer, accompanied with appropriate outputs which give, componentwise, upper and lower bounds for the solutions of the studied system.
Continuous-Discrete Observers for Time-Varying Nonlinear Systems: A Tutorial on Recent Results.
A New Approach to the Internally Positive Representation of Linear MIMO Systems The problem of representing linear systems through combination of positive systems is relevant when signal processing schemes, such as filters, state observers, or control laws, are to be implemented using “positive” technologies, such as Charge Routing Networks and fiber optic filters. This problem, well investigated in the SISO case, can be recast into the more general problem of Internally Positive Representation (IPR) of systems. This paper presents a methodology for the construction of such IPRs for MIMO systems, based on a suitable convex positive representation of complex vectors and matrices. The stability properties of the IPRs are investigated in depth, achieving the important result that any stable system admits a stable IPR of finite dimension. A main algorithm and three variants, all based on the proposed methodology, are presented for the construction of stable IPRs. All of them are straightforward and are characterized by a very low computational cost. The first and second may require a large state-space dimension to provide a stable IPR, while the third and the fourth are aimed at providing stable IPRs of reduced order.
Design of a continuous-discrete observer for state affine systems In many engineering control problems, the output measurements are discrete-time ones. In this paper, we show how to design an observer for continuous-time state affine systems using discrete-time output measurements.
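The continuous-discrete pattern common to these observer papers can be sketched numerically: integrate a model copy between samples, and apply an output-injection correction at each measurement instant. The system, the gain L, and the sampling period below are arbitrary illustrative choices, not the gain synthesis of any of the papers:

```python
import numpy as np

# Continuous-discrete observer sketch: plant x' = Ax evolves continuously,
# measurements y_k = C x(t_k) arrive every T_s seconds. Between samples the
# observer integrates the model; at each sample it applies a correction.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])         # harmonic oscillator
C = np.array([[1.0, 0.0]])                      # position is measured
L = np.array([[0.8], [0.3]])                    # hypothetical injection gain

dt, T_s = 0.001, 0.1
x = np.array([1.0, 0.0])                        # true state
xh = np.zeros(2)                                # observer estimate

for step in range(int(20.0 / dt)):
    x = x + dt * (A @ x)                        # plant, explicit Euler
    xh = xh + dt * (A @ xh)                     # observer prediction phase
    if step % int(T_s / dt) == 0:               # a measurement arrives
        y = C @ x
        xh = xh + (L @ (y - C @ xh)).ravel()    # discrete correction jump

print("final estimation error:", np.linalg.norm(x - xh))
```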
Global Exponential Sampled-Data Observers for Nonlinear Systems with Delayed Measurements This paper presents new results concerning the observer design for certain classes of nonlinear systems with both sampled and delayed measurements. By using a small gain approach we provide sufficient conditions, which involve both the delay and the sampling period, ensuring exponential convergence of the observer system error. The proposed observer is robust with respect to measurement errors and perturbations of the sampling schedule. Moreover, new results on the robust global exponential state predictor design problem are provided, for wide classes of nonlinear systems.
Robust compensation of a chattering time-varying input delay We investigate the design of a prediction-based controller for a linear system subject to a time-varying input delay, not necessarily causal. This means that the information feeding the system can be older than information previously received. We propose to use the current delay value in the prediction employed in the control law. Modeling the input delay as a transport Partial Differential Equation, we prove asymptotic tracking of the system state, provided that the average ℒ2-norm of the delay time-derivative is sufficiently small. This result is obtained by generalizing the Halanay inequality to time-varying differential inequalities.
Input delay compensation of linear systems with both state and input delays by adding integrators This paper studies stabilization of linear systems with both state and input delays. A dynamic input-delay compensator obtained by adding integrators is established to compensate the input delays that can be arbitrarily large. With the input delay compensator, the original stabilization problem reduces to the problem of stabilizing an augmented linear time-delay system without input delay. Three methods are also proposed to design stabilizing controllers for the augmented linear time-delay system. The first method is based on linear matrix inequalities (LMIs) and the second method is based on model reduction. The third method is based on pole placement and is built for the particular case that the original time-delay system has only a pure delayed state vector on its right hand side. For this method, the optimal gain such that the decay rate of the closed-loop system is maximized is also proposed. The effectiveness of the proposed approaches is illustrated by three linear time-delay systems that are open-loop unstable.
Chains of recurrences—a method to expedite the evaluation of closed-form functions Chains of Recurrences (CR's) are introduced as an effective method to evaluate functions at regular intervals. Algebraic properties of CR's are examined and an algorithm that constructs a CR for a given function is explained. Finally, an implementation of the method in MAXIMA/Common Lisp is discussed.
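A minimal sketch of the CR idea for the polynomial case: construct the chain by repeated forward differences, then step along the grid using only additions per evaluated point. The function and grid are hypothetical examples:

```python
def cr_coeffs(f, x0, h, order):
    """Build the chain of recurrences {c0, +, c1, +, ..., +, c_order} for f on
    the grid x0 + i*h via repeated forward differences (exact when f is a
    polynomial of degree <= order)."""
    samples = [f(x0 + i * h) for i in range(order + 1)]
    coeffs = []
    while samples:
        coeffs.append(samples[0])
        samples = [b - a for a, b in zip(samples, samples[1:])]
    return coeffs

def cr_evaluate(coeffs, n):
    """Yield f(x0), f(x0+h), ..., f(x0+(n-1)h) using only additions per point."""
    c = list(coeffs)
    for _ in range(n):
        yield c[0]
        for j in range(len(c) - 1):             # shift the chain one grid step
            c[j] += c[j + 1]

f = lambda x: 3 * x**3 - x + 2                  # hypothetical closed form
print(list(cr_evaluate(cr_coeffs(f, x0=0.0, h=0.5, order=3), 5)))
print([f(0.5 * i) for i in range(5)])           # matches direct evaluation
```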
Dynamic spectrum access in open spectrum wireless networks One of the reasons for the limitation of bandwidth in current generation wireless networks is the spectrum policy of the Federal Communications Commission (FCC). But with the spectrum policy reform, open spectrum wireless networks and spectrum agile radios are set to drive next-generation wireless networks. In this paper, we investigate continuous-time Markov models for dynamic spectrum access in open spectrum wireless networks. Both the queueing and non-queueing cases are considered. Analytical results are derived based on the Markov models. A random access protocol is proposed that is shown to achieve airtime fairness. A distributed version of this protocol that uses only local information is also proposed, based on the homo egualis anthropological model. Inequality aversion by the radio systems to achieve fairness is captured by this model. These protocols are then extended to spectrum agile radios. Extensive simulation results are presented to compare the performances of fixed versus agile radios.
A dynamic analysis of the Dickson charge pump circuit Dynamics of the Dickson charge pump circuit are analyzed. The analytical results enable the estimation of the rise time of the output voltage and that of the power consumption during boosting. By using this analysis, the optimum number of stages to minimize the rise time has been estimated as 1.4 N_min, where N_min is the minimum value of the number of stages necessary for a given parame...
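A small illustrative use of the quoted result, under a deliberately simplified model (per-stage diode drop, no load current); the supply, threshold, and target values are made up:

```python
import math

def dickson_stages(vdd, vt, v_target):
    """Idealized Dickson model: V_out ~= V_DD + N*(V_DD - V_t) with a diode
    drop V_t per stage and no load current. N_min stages reach v_target;
    per the analysis above, ~1.4*N_min minimizes the rise time."""
    n_min = math.ceil((v_target - vdd) / (vdd - vt))
    return n_min, 1.4 * n_min

n_min, n_opt = dickson_stages(vdd=1.8, vt=0.5, v_target=5.0)
print(f"N_min = {n_min}, rise-time-optimal stage count ~ {n_opt:.1f}")
```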
Design Aspects of an Active Electromagnetic Suspension System for Automotive Applications. This paper is concerned with the design aspects of an active electromagnetic suspension system for automotive applications, which combines a brushless tubular permanent magnet actuator (TPMA) with a passive spring. This system provides for additional stability and safety by performing active roll and pitch control during cornering and braking. Furthermore, elimination of the road irregularities is possible, hence passenger drive comfort is increased. Based upon measurements, static and dynamic specifications of the actuator are derived. The electromagnetic suspension is installed on a quarter car test setup, and the improved performance using roll control is measured and compared to a commercial passive system. An alternative design using a slotless external magnet tubular actuator is proposed which fulfills the derived performance, thermal and volume specifications.
3.4 A 36Gb/s PAM4 transmitter using an 8b 18GS/s DAC in 28nm CMOS At data rates beyond 10Gb/s, most wireline links employ NRZ signaling. Serial NRZ links as high as 56Gb/s and 60Gb/s have been reported [1]. Nevertheless, as the rate increases, the constraints imposed by the channel, package, and die become more severe and do not benefit from process scaling in the same fashion that circuit design does. Reflections from impedance discontinuities in the PCB and package caused by vias and connectors introduce significant signal loss and distortions at higher frequencies. Even with an ideal channel, at every package-die interface, there is an intrinsic parasitic capacitance due to the pads and the ESD circuit amounting to at least 150fF, and a 50Ω resistor termination at both the transmit and receive ends resulting in an intrinsic pole at 23GHz or lower. In light of all these limitations, serial NRZ signaling beyond 60Gb/s appears suboptimal in terms of both power and performance. Utilizing various modulation techniques such as PAM4, one can achieve a higher spectral efficiency [2]. To enable such transmission formats, high-speed moderate-resolution data converters are required. This paper describes a 36Gb/s transmitter based on an 18GS/s 8b DAC implemented in 28nm CMOS, compliant to the new IEEE802.3bj standard for 100G Ethernet over backplane and copper cables [3].
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
1.051584
0.0517
0.0517
0.027657
0.020627
0.013359
0.000868
0.000008
0
0
0
0
0
0
Software radio issues in cellular base stations The use of the “software radio” concept in cellular applications is a topic of widespread interest. Two key issues in the implementation of software radios are the development of optimal receivers that require the minimum number of bits in the wide-band analog-to-digital converter (ADC) and efficient channelizers that extract individual channels from the digitized wide-band signal. In this paper, both of these issues are studied in detail for cellular base stations. A computationally efficient wide-band channelizer is presented. This channelizer is closely related to the discrete Fourier transform filter bank used in transmultiplexers. It is shown that the complexity of the proposed channelizer is significantly less (2-50×) than the complexity of conventional channelizers. An optimal receiver that explicitly takes into account the effect of the quantization noise of the wide-band ADC is also derived. The analysis of the ADC noise provides guidelines for specifying wide-band ADC for use in cellular applications. The development of the channelizer and the optimal receiver yield important insights into the implementation of cellular software radios. All of the key results of this paper are applied to a detailed example based on the Digital Advanced Mobile Phone System (D-AMPS, IS-54/IS-136) cellular standard. The bit-error rate (BER) performance simulations of a D-AMPS wide-band receiver are presented as a part of this example.
Advanced base station technology The authors present an overview of advances in three areas including software radio, adaptive antenna technology, and high-temperature superconductivity as currently envisioned for use in advanced cellular base stations. The conclusion broadly drawn is that the advancement in DSP and ASIC technologies will provide the major contribution in making these technologies technically and commercially realistic.
Efficient filterbank channelizers for software radio receivers For cellular software radio receivers, this paper presents a computationally efficient algorithm for extracting individual radio channels from the output of the wideband A/D converter. In a software radio, the extraction of individual channels from the output of the wide band A/D converter is by far the most computationally demanding task; hence, it is very important to devise computationally efficient algorithms for this task. Our algorithm is obtained by modifying the DFT filter bank structure that is well known in the multi-rate signal processing literature (Scheuermann and Gockler, 1981; Vaidyanathan, 1990). We show that the complexity of the proposed algorithm is significantly less (2×-50×) than the complexity of the conventional channelizers.
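For reference, the computation such a channelizer performs can be written directly as mix, lowpass filter, decimate per channel. The DFT filter bank restructures exactly this into one polyphase filter plus one FFT per output sample, which is where the 2×-50× savings comes from. The tone frequencies and the crude prototype filter below are illustrative only:

```python
import numpy as np

def channelize_reference(x, h, n_chan):
    """Reference (unoptimized) channelizer: for each channel k, mix it down
    to DC, lowpass filter with the prototype h, and decimate by n_chan.
    The DFT filter bank computes the same outputs with one polyphase filter
    and a single FFT per output sample."""
    n = np.arange(len(x))
    out = []
    for k in range(n_chan):
        mixed = x * np.exp(-2j * np.pi * k * n / n_chan)   # channel k -> DC
        out.append(np.convolve(mixed, h)[::n_chan])        # filter + decimate
    return np.array(out)

fs, n_chan = 8000.0, 8                          # 8 channels of 1 kHz each
t = np.arange(4096) / fs
x = np.cos(2 * np.pi * 1000 * t) + 0.5 * np.cos(2 * np.pi * 3000 * t)
h = np.sinc(np.arange(-64, 65) / n_chan) / n_chan          # crude 500 Hz lowpass
y = channelize_reference(x, h, n_chan)
print(np.round([float(np.abs(y[k]).mean()) for k in range(n_chan)], 3))
# energy appears in channels 1/7 (1 kHz tone) and 3/5 (3 kHz tone)
```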
A multistage filterbank-based channelizer and its multiplier-less realization This paper proposes a multistage filterbank-based channelizer for software radio base stations. The proposed channelizer is capable of receiving channels with potentially different bandwidths as required in a multi-standard cellular base station. It consists of multiple stages of DFT filter banks and efficient sample rate changers. The front-end DFT filter bank of the channelizer has a fixed number of channels, but the passband supports overlap with one another. The received signals selected by a given output of this filter bank are fed into sample rate changers so that they can fit into the fixed channel spacing of the DFT filter banks in the following stages. Due to the lowered sample rate, these back-end DFT filter banks can have either fixed or variable number of channels. Repeatedly using this multistage architecture, channels with different bandwidths can be isolated. The design and implementation of the proposed channelizer are discussed in detail. An example of dual-mode GSM/W-CDMA channelizer is also discussed to illustrate the proposed design methodology.
Efficient wideband channelizer for software radio systems using modulated PR filterbanks An efficient method is proposed for channelizing frequency division multiplexed (FDM) channels in wideband software radio (SWR) received signals that do not satisfy the conditions required for polyphase decomposition of the discrete filterbank (DFB) channelizer. The proposed method, which uses modulated perfect reconstruction (PR) filterbanks, requires fewer computations than DFBs for channelizing wideband signals that are composed of FDM channels of nonequal bandwidths, especially when a large number of channels are extracted. The proposed channelizer, if applied in the reverse direction, can be used to synthesize a set of channels with nonequal bandwidths into a single wideband signal in SWR transmitters. A method is also proposed for efficiently designing the modulated PR filterbanks, which have a large number of subchannels and prototype filters with high stopband attenuations that are used in the proposed channelizer. The computational complexity of the proposed channelizer is compared with the complexity of the DFB channelizer for channelizing the wideband and high-dynamic-range signals that are typical of SWR systems, and simulation results of the proposed channelization method are discussed.
Direct bandpass sampling of multiple distinct RF signals A goal in the software radio design philosophy is to place the analog-to-digital converter as near the antenna as possible. This objective has been demonstrated for the case of a single input signal. Bandpass sampling has been applied to downconvert, or intentionally alias, the information bandwidth of a radio frequency (RF) signal to a desired intermediate frequency. The design of the software radio becomes more interesting when two or more distinct signals are received. The traditional approach for multiple signals would be to bandpass sample a continuous span of spectrum containing all the desired signals. The disadvantage with this approach is that the sampling rate and associated discrete processing rate are based on the span of spectrum as opposed to the information bandwidths of the signals of interest. Proposed here is a technique to determine the absolute minimum sampling frequency for direct digitization of multiple, nonadjacent, frequency bands. The entire process is based on the calculation of a single parameter—the sampling frequency. The result is a simple, yet elegant, front-end design for the reception and bandpass sampling of multiple RF signals. Experimental results using RF transmissions from the U.S. Global Positioning System—Standard Position Service (GPS-SPS) and the Russian Global Navigation Satellite System (GLONASS) are used to illustrate and verify the theory.
Direct downconversion of multiband RF signals using bandpass sampling Bandpass sampling can be used by radio receivers to directly digitize the radio frequency (RF) signals. Although the bandpass sampling theory for single-band RF signals is well established, its counterpart for multiband RF signals is relatively immature. In this paper, we propose a novel and efficient method to find the ranges of valid bandpass sampling frequency for direct downconverting multiband RF signals. Simple formulas for the ranges of valid bandpass sampling frequency in terms of the frequency locations of the multiple RF bands are derived. The result can be used to design a multiband receiver for software defined radios.
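The "well established" single-band case that this line of work generalizes can be enumerated directly. The band edges below are arbitrary, and the formula is the classic single-band condition rather than the multiband result of the paper:

```python
import math

def valid_bandpass_rates(f_low, f_high):
    """Classic single-band condition: fs is alias-free iff
    2*f_high/n <= fs <= 2*f_low/(n-1) for some integer n with
    1 <= n <= floor(f_high / (f_high - f_low))."""
    bw = f_high - f_low
    ranges = []
    for n in range(1, math.floor(f_high / bw) + 1):
        lo = 2.0 * f_high / n
        hi = 2.0 * f_low / (n - 1) if n > 1 else float("inf")
        if lo <= hi:
            ranges.append((lo, hi))
    return ranges

# A 5 MHz-wide band at 20-25 MHz: the minimum valid rate is 2*BW = 10 MHz,
# far below the 2*f_high = 50 MHz demanded by lowpass (Nyquist) sampling.
for lo, hi in valid_bandpass_rates(20e6, 25e6):
    print(f"{lo/1e6:8.3f} MHz <= fs <= {hi/1e6:8.3f} MHz")
```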
A Single-Chip 10-Band WCDMA/HSDPA 4-Band GSM/EDGE SAW-less CMOS Receiver With DigRF 3G Interface and +90 dBm IIP2 This paper describes the design and performance of a 90 nm CMOS SAW-less receiver with DigRF interface that supports 10 WCDMA bands (I, II, III, IV, V, VI, VIII, IX, X, XI) and 4 GSM bands (GSM850, EGSM900, DCS1800, PCS1900). The receiver is part of a single-chip SAW-less transceiver reference platform IC for mass-market smartphones, which has been designed to meet Category 10 HSDPA (High Speed Do...
Gradient-Based Learning Applied to Document Recognition Multilayer neural networks trained with the back-propagation algorithm constitute the best example of a successful gradient based learning technique. Given an appropriate network architecture, gradient-based learning algorithms can be used to synthesize a complex decision surface that can classify high-dimensional patterns, such as handwritten characters, with minimal preprocessing. This paper rev...
Memory errors: the past, the present, and the future Memory error exploitations have been around for over 25 years and still rank among the top 3 most dangerous software errors. Why haven't we been able to stop them? Given the host of security measures on modern machines, are we less vulnerable than before, and can we expect to eradicate memory error problems in the near future? In this paper, we present a quarter century worth of memory errors: attacks, defenses, and statistics. A historical overview provides insights in past trends and developments, while an investigation of real-world vulnerabilities and exploits allows us to answer on the significance of memory errors in the foreseeable future.
The Quadrature LC Oscillator: A Complete Portrait Based on Injection Locking We show that the quadrature LC oscillator is best treated as two strongly coupled, nominally identical oscillators that are locked to the same frequency. Differential equations that extend Adler's description of locking to strong injection reveal the full dynamics of this circuit. With a simplifying insight, the analysis reveals all the modes of the oscillator, their stability, the effects of mism...
Causality, influence, and computation in possibly disconnected synchronous dynamic networks In this work, we study the propagation of influence and computation in dynamic distributed computing systems that are possibly disconnected at every instant. We focus on a synchronous message-passing communication model with broadcast and bidirectional links. Our network dynamicity assumption is a worst-case dynamicity controlled by an adversary scheduler, which has received much attention recently. We replace the usual (in worst-case dynamic networks) assumption that the network is connected at every instant by minimal temporal connectivity conditions. Our conditions only require that another causal influence occurs within every time window of some given length. Based on this basic idea, we define several novel metrics for capturing the speed of information spreading in a dynamic network. We present several results that correlate these metrics. Moreover, we investigate termination criteria in networks in which an upper bound on any of these metrics is known. We exploit our termination criteria to provide efficient (and optimal in some cases) protocols that solve the fundamental counting and all-to-all token dissemination (or gossip) problems.
Kinesis: a security incident response and prevention system for wireless sensor networks This paper presents Kinesis, a security incident response and prevention system for wireless sensor networks, designed to keep the network functional despite anomalies or attacks and to recover from attacks without significant interruption. Due to the deployment of sensor networks in various critical infrastructures, the applications often impose stringent requirements on data reliability and service availability. Given the failure- and attack-prone nature of sensor networks, it is a pressing concern to enable the sensor networks provide continuous and unobtrusive services. Kinesis is quick and effective in response to incidents, distributed in nature, and dynamic in selecting response actions based on the context. It is lightweight in terms of response policy specification, and communication and energy overhead. A per-node single timer based distributed strategy to select the most effective response executor in a neighborhood makes the system simple and scalable, while achieving proper load distribution and redundant action optimization. We implement Kinesis in TinyOS and measure its performance for various application and network layer incidents. Extensive TOSSIM simulations and testbed experiments show that Kinesis successfully counteracts anomalies/attacks and behaves consistently under various attack scenarios and rates.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
1.014482
0.010509
0.010485
0.005243
0.002878
0.000448
0.000008
0
0
0
0
0
0
0
An Ultra High-Frequency 8-Channel Neurostimulator Circuit with 68% Peak Power Efficiency. In order to recruit neurons in excitable tissue, constant current neural stimulators are commonly used. Recently, ultra high-frequency (UHF) stimulation has been proposed and proven to have the same efficacy as constant-current stimulation [1]. This paper presents the design, integrated circuit (IC) implementation and measurement results of a power efficient multichannel UHF neural stimulator. The core of the neurostimulator is based on our previously proposed architecture of an inductor-based buck-boost DC-DC converter without the external output capacitor [2]. The ultimate goal of this work is to increase the power efficiency of the UHF stimulator for multiple-channel operation, while keeping the number of external components minimal. To this end, a number of novel approaches were employed in the integrated circuit design domain. More specifically, a novel zero-current detection scheme is proposed. It makes it possible to remove the freewheel diode typically used in DC-DC converters to prevent current from flowing back from the load to the inductor. Furthermore, a gate-driver circuit is implemented which allows the use of thin gate-oxide transistors as high-voltage switches. By doing so, the need for a high-voltage supply is eliminated and the stimulator is powered from a 3.5V input voltage. Both the current detection technique and the gate driving circuit of the current implementation make it possible to boost the power efficiency by up to 300% compared to previous UHF stimulator works. A peak power efficiency of 68% is achieved. The circuit is implemented in a 0.18 μm HV process, and the total chip area is 3.65 mm².
Compact, Energy-Efficient High-Frequency Switched Capacitor Neural Stimulator With Active Charge Balancing. Safety and energy efficiency are two major concerns for implantable neural stimulators. This paper presents a novel high-frequency, switched capacitor (HFSC) stimulation and active charge balancing scheme, which achieves high energy efficiency and well-controlled stimulation charge in the presence of large electrode impedance variations. Furthermore, the HFSC can be implemented in a compact size w...
A Digitally Dynamic Power Supply Technique for 16-Channel 12 V-Tolerant Stimulator Realized in a 0.18-μm 1.8-V/3.3-V Low-Voltage CMOS Process. A new digitally dynamic power supply technique for 16-channel 12-V-tolerant stimulator is proposed and realized in a 0.18-μm 1.8-V/3.3-V CMOS process. The proposed stimulator uses four stacked transistors as the pull-down switch and pull-up switch to withstand 4 times the nominal supply voltage (4 × VDD). With the dc input voltage of 3.3 V, the regulated three-stage charge pump, which is capable ...
A Trimodal Wireless Implantable Neural Interface System-on-Chip A wireless and battery-less trimodal neural interface system-on-chip (SoC), capable of 16-ch neural recording, 8-ch electrical stimulation, and 16-ch optical stimulation, all integrated on a 5 × 3 mm² chip fabricated in 0.35-μm standard CMOS process. The trimodal SoC is designed to be inductively powered and communicated. The downlink data telemetry utilizes on-off keying pulse-position modulation (OOK-PPM) of the power carrier to deliver configuration and control commands at 50 kbps. The analog front-end (AFE) provides adjustable mid-band gain of 55-70 dB, low/high cut-off frequencies of 1-100 Hz/10 kHz, and input-referred noise of 3.46 μVrms within 1 Hz-50 kHz band. AFE outputs of every two-channel are digitized by a 50 kS/s 10-bit SAR-ADC, and multiplexed together to form a 6.78 Mbps data stream to be sent out by OOK modulating a 434 MHz RF carrier through a power amplifier (PA) and 6 cm monopole antenna, which form the uplink data telemetry. Optical stimulation has a switched-capacitor based stimulation (SCS) architecture, which can sequentially charge four storage capacitor banks up to 4 V and discharge them in selected μLEDs at instantaneous current levels of up to 24.8 mA on demand. Electrical stimulation is supported by four independently driven stimulating sites at 5-bit controllable current levels in ±(25-775) μA range, while active/passive charge balancing circuits ensure safety. In vivo testing was conducted on four anesthetized rats to verify the functionality of the trimodal SoC.
A fully integrated low-power BPSK demodulator for implantable medical devices During the past decades, research has progressed on the biomedical implantable electronic devices that require power and data communication through wireless inductive links. In this paper, we present a fully integrated binary phase-shift keying (BPSK) demodulator, which is based on a hard-limited COSTAS loop topology, dedicated to such implantable medical devices. The experimental results of the proposed demodulator show a data transmission rate of 1.12 Mbps, less than 0.7 mW consumption under a supply voltage of 1.8 V, and silicon area of 0.2 mm² in the Taiwan Semiconductor Manufacturing Company (TSMC) CMOS 0.18-μm technology. The transmitter satisfies the requirement of applications relative to high forward-transferring data rate, such as cortical stimulation. Moreover, the employment of BPSK demodulation along with a passive modulation method allows full-duplex data communication between an external controller and the implantable device, which may improve the controllability and observability of the overall implanted system.
A Minimally Invasive 64-Channel Wireless μECoG Implant Emerging applications in brain-machine interface systems require high-resolution, chronic multisite cortical recordings, which cannot be obtained with existing technologies due to high power consumption, high invasiveness, or inability to transmit data wirelessly. In this paper, we describe a microsystem based on electrocorticography (ECoG) that overcomes these difficulties, enabling chronic recording and wireless transmission of neural signals from the surface of the cerebral cortex. The device is comprised of a highly flexible, high-density, polymer-based 64-channel electrode array and a flexible antenna, bonded to 2.4 mm × 2.4 mm CMOS integrated circuit (IC) that performs 64-channel acquisition, wireless power and data transmission. The IC digitizes the signal from each electrode at 1 kS/s with 1.2 μV input referred noise, and transmits the serialized data using a 1 Mb/s backscattering modulator. A dual-mode power-receiving rectifier reduces data-dependent supply ripple, enabling the integration of small decoupling capacitors on chip and eliminating the need for external components. Design techniques in the wireless and baseband circuits result in over 16× reduction in die area with a simultaneous 3× improvement in power efficiency over the state of the art. The IC consumes 225 μW and can be powered by an external reader transmitting 12 mW at 300 MHz, which is over 3× lower than IEEE and FCC regulations.
A Bayesian Method for the Induction of Probabilistic Networks from Data This paper presents a Bayesian method for constructing probabilistic networks from databases. In particular, we focus on constructing Bayesian belief networks. Potential applications include computer-assisted hypothesis testing, automated scientific discovery, and automated construction of probabilistic expert systems. We extend the basic method to handle missing data and hidden (latent) variables. We show how to perform probabilistic inference by averaging over the inferences of multiple belief networks. Results are presented of a preliminary evaluation of an algorithm for constructing a belief network from a database of cases. Finally, we relate the methods in this paper to previous work, and we discuss open problems.
Broadband MIMO-OFDM Wireless Communications Orthogonal frequency division multiplexing (OFDM) is a popular method for high data rate wireless transmission. OFDM may be combined with antenna arrays at the transmitter and receiver to increase the diversity gain and/or to enhance the system capacity on time-varying and frequency-selective channels, resulting in a multiple-input multiple-output (MIMO) configuration. The paper explores various p...
Model predictive control: theory and practice—a survey We refer to Model Predictive Control (MPC) as that family of controllers in which there is a direct use of an explicit and separately identifiable model. Control design methods based on the MPC concept have found wide acceptance in industrial applications and have been studied by academia. The reason for such popularity is the ability of MPC designs to yield high performance control systems capable of operating without expert intervention for long periods of time. In this paper the issues of importance that any control system should address are stated. MPC techniques are then reviewed in the light of these issues in order to point out their advantages in design and implementation. A number of design techniques emanating from MPC, namely Dynamic Matrix Control, Model Algorithmic Control, Inferential Control and Internal Model Control, are put in perspective with respect to each other and the relation to more traditional methods like Linear Quadratic Control is examined. The flexible constraint handling capabilities of MPC are shown to be a significant advantage in the context of the overall operating objectives of the process industries and the 1-, 2-, and ∞-norm formulations of the performance objective are discussed. The application of MPC to non-linear systems is examined and it is shown that its main attractions carry over. Finally, it is explained that though MPC is not inherently more or less robust than classical feedback, it can be adjusted more easily for robustness.
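The receding-horizon idea at the core of MPC fits in a few lines for the unconstrained linear-quadratic case: stack the predictions, solve for the input sequence in closed form, apply only the first input, and repeat. The double-integrator model, weights, and horizon below are arbitrary illustrative choices:

```python
import numpy as np

# Receding-horizon control in miniature (unconstrained LQ case): at each step,
# stack the predictions X = F x + G U for x+ = A x + B u, minimize
# X'QX + U'RU in closed form, apply only the first input, then re-solve.
A = np.array([[1.0, 0.1], [0.0, 1.0]])          # discretized double integrator
B = np.array([[0.005], [0.1]])
Q, R, H = np.diag([10.0, 1.0]), 0.1, 20         # weights and horizon

def mpc_step(x):
    F = np.vstack([np.linalg.matrix_power(A, i + 1) for i in range(H)])
    G = np.zeros((2 * H, H))
    for i in range(H):
        for j in range(i + 1):
            G[2 * i:2 * i + 2, j] = (np.linalg.matrix_power(A, i - j) @ B).ravel()
    Qbar = np.kron(np.eye(H), Q)
    U = np.linalg.solve(G.T @ Qbar @ G + R * np.eye(H), -G.T @ Qbar @ (F @ x))
    return U[0]                                 # apply the first move only

x = np.array([1.0, 0.0])
for _ in range(50):
    x = A @ x + (B * mpc_step(x)).ravel()
print("state after 50 steps:", np.round(x, 4))
```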
Type-2 Fuzzy Sets and Systems: An Overview [corrected reprint] As originally published in the February 2007 issue of IEEE Computational Intelligence Magazine, the above titled paper (ibid., vol. 2, no. 1, pp. 20-29, Feb 07) contained errors in mathematics that were introduced by the publisher. The corrected version is reprinted in its entirety.
Linear Amplification with Nonlinear Components A technique for producing bandpass linear amplification with nonlinear components (LINC) is described. The bandpass signal first is separated into two constant envelope component signals. All of the amplitude and phase information of the original bandpass signal is contained in phase modulation on the component signals. These constant envelope signals can be amplified or translated in frequency by amplifiers or mixers which have nonlinear input-output amplitude transfer characteristics. Passive linear combining of the amplified and/or translated component signals produces an amplified and/or translated replica of the original signal.
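The LINC decomposition itself is a trigonometric identity: with outphasing angle θ = arccos(A/A_max), the two constant-envelope components sum back to the original signal exactly, since cos(X+θ) + cos(X−θ) = 2 cos X cos θ. A small numpy check, with arbitrarily chosen modulation waveforms:

```python
import numpy as np

fs, fc = 1e6, 100e3
t = np.arange(2048) / fs
a = 0.5 + 0.4 * np.cos(2 * np.pi * 1e3 * t)     # amplitude modulation, a <= a_max
phi = 0.3 * np.sin(2 * np.pi * 2e3 * t)         # phase modulation
a_max = 1.0

theta = np.arccos(a / a_max)                    # outphasing angle
s1 = (a_max / 2) * np.cos(2 * np.pi * fc * t + phi + theta)
s2 = (a_max / 2) * np.cos(2 * np.pi * fc * t + phi - theta)
s = a * np.cos(2 * np.pi * fc * t + phi)        # original bandpass signal

e1 = (a_max / 2) * np.exp(1j * (phi + theta))   # complex envelope of s1
print("envelope spread of s1:", np.ptp(np.abs(e1)))      # ~0: constant envelope
print("max reconstruction error:", np.max(np.abs(s1 + s2 - s)))
```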
Recurrent-Fuzzy-Neural-Network-Controlled Linear Induction Motor Servo Drive Using Genetic Algorithms A recurrent fuzzy neural network (RFNN) controller based on real-time genetic algorithms (GAs) is developed for a linear induction motor (LIM) servo drive in this paper. First, the dynamic model of an indirect field-oriented LIM servo drive is derived. Then, an online training RFNN with a backpropagation algorithm is introduced as the tracking controller. Moreover, to guarantee the global convergence of tracking error, a real-time GA is developed to search the optimal learning rates of the RFNN online. The GA-based RFNN control system is proposed to control the mover of the LIM for periodic motion. The theoretical analyses for the proposed GA-based RFNN controller are described in detail. Finally, simulated and experimental results show that the proposed controller provides high-performance dynamic characteristics and is robust with regard to plant parameter variations and external load disturbance
Optimum insertion/deletion point selection for fractional sample rate conversion In this paper, an optimum insertion/deletion point selection algorithm for fractional sample rate conversion (SRC) is proposed. The direct insertion/deletion technique achieves low complexity and low power consumption as compared to the other fractional SRC methods. Using a multiple set insertion/deletion technique is efficient for reduction of distortion caused by the insertion/deletion step. When the conversion factor is (N ± α)/N, the number of possible patterns of insertion/deletion points and the number of combinations for multiple set inserters/deleters grow as α increases. The proposed algorithm minimizes the distortion due to SRC by selecting the patterns and the combinations.
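A sketch of the direct-deletion mechanism the paper optimizes: deleting d samples per block of n changes the rate by (n−d)/n. Here the deletion points are simply given; the paper's algorithm is precisely about choosing them (and the multiple-set combinations) to minimize distortion:

```python
import numpy as np

def direct_deletion_src(x, n, delete_points):
    """Per block of n input samples, delete the samples at the given in-block
    positions, changing the rate by (n - len(delete_points)) / n."""
    keep = [i for i in range(n) if i not in set(delete_points)]
    blocks = x[: len(x) // n * n].reshape(-1, n)
    return blocks[:, keep].ravel()

fs, n = 48000, 160                              # 48 kHz -> 47.7 kHz via 159/160
t = np.arange(4800) / fs
x = np.sin(2 * np.pi * 440 * t)
y = direct_deletion_src(x, n, delete_points=[80])
print(len(x), "->", len(y))                     # 4800 -> 4770 samples
```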
Multi-Channel Neural Recording Implants: A Review. The recently growing progress in neuroscience research and relevant achievements, as well as advancements in the fabrication process, have increased the demand for neural interfacing systems. Brain-machine interfaces (BMIs) have been revealed to be a promising method for the diagnosis and treatment of neurological disorders and the restoration of sensory and motor function. Neural recording implants, as a part of BMI, are capable of capturing brain signals, and amplifying, digitizing, and transferring them outside of the body with a transmitter. The main challenges of designing such implants are minimizing power consumption and the silicon area. In this paper, multi-channel neural recording implants are surveyed. After presenting various neural-signal features, we investigate main available neural recording circuit and system architectures. The fundamental blocks of available architectures, such as neural amplifiers, analog to digital converters (ADCs) and compression blocks, are explored. We cover the various topologies of neural amplifiers, provide a comparison, and probe their design challenges. To achieve a relatively high SNR at the output of the neural amplifier, noise reduction techniques are discussed. Also, to transfer neural signals outside of the body, they are digitized using data converters, then in most cases, the data compression is applied to mitigate power consumption. We present the various dedicated ADC structures, as well as an overview of main data compression methods.
1.24
0.24
0.24
0.06
0.016
0.003529
0
0
0
0
0
0
0
0
Using branch predictors to predict brain activity in brain-machine implants. A key problem with implantable brain-machine interfaces is that they need extreme energy efficiency. One way of lowering energy consumption is to use the low power modes available on the processors embedded in these devices. We present a technique to predict when neuronal activity of interest is likely to occur so that the processor can run at nominal operating frequency at those times, and be placed in low power modes otherwise. To achieve this, we discover that branch predictors can also predict brain activity. We perform brain surgeries on awake and anesthetized mice, and evaluate the ability of several branch predictors to predict neuronal activity in the cerebellum. We find that perceptron branch predictors can predict cerebellar activity with accuracies as high as 85%. Consequently, we co-opt branch predictors to dictate when to transition between low power and normal operating modes, saving as much as 59% of processor energy.
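A toy perceptron predictor of the kind the paper co-opts, trained here on a synthetic history-correlated "branch" rather than on neuronal data; the table size, history length, and training threshold are conventional choices, not the paper's configuration:

```python
class PerceptronPredictor:
    """Perceptron predictor: one weight vector per (hashed) PC, dot product
    against a +/-1 global history; train on a misprediction or when the
    output magnitude is below the threshold."""
    def __init__(self, n_perceptrons=64, hist_len=16):
        self.hist_len = hist_len
        self.weights = [[0] * (hist_len + 1) for _ in range(n_perceptrons)]
        self.history = [1] * hist_len           # +1 taken, -1 not taken
        self.theta = int(1.93 * hist_len + 14)  # conventional training threshold

    def predict(self, pc):
        w = self.weights[pc % len(self.weights)]
        y = w[0] + sum(wi * hi for wi, hi in zip(w[1:], self.history))
        return y >= 0, y

    def update(self, pc, taken):
        pred, y = self.predict(pc)
        t = 1 if taken else -1
        w = self.weights[pc % len(self.weights)]
        if pred != taken or abs(y) <= self.theta:
            w[0] += t
            for i in range(self.hist_len):
                w[i + 1] += t * self.history[i]
        self.history = self.history[1:] + [t]
        return pred == taken

# Synthetic stand-in signal: "taken" unless the last two outcomes were taken.
p = PerceptronPredictor()
correct = sum(p.update(pc=0x40, taken=not (p.history[-1] == 1 and p.history[-2] == 1))
              for _ in range(5000))
print(f"accuracy: {correct / 5000:.2%}")
```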
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use dominance frontiers, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
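Dominance frontiers admit a very short computation once immediate dominators are known; the formulation below is the later Cooper-Harvey-Kennedy one, not necessarily the algorithm of this paper, and the diamond CFG is a made-up example:

```python
# Dominance frontiers from immediate dominators (Cooper-Harvey-Kennedy style):
# DF(x) is the set of join points where phi-functions are needed for
# definitions made in x. CFG: 0 -> 1, 1 -> {2, 3}, {2, 3} -> 4 (a diamond).
preds = {1: [0], 2: [1], 3: [1], 4: [2, 3]}     # predecessor lists
idom = {1: 0, 2: 1, 3: 1, 4: 1}                 # immediate dominators (entry 0)

df = {node: set() for node in range(5)}
for b, ps in preds.items():
    if len(ps) >= 2:                            # only join points contribute
        for runner in ps:
            while runner != idom[b]:            # walk up the dominator tree
                df[runner].add(b)
                runner = idom[runner]

print(df)   # DF(2) = DF(3) = {4}: defs in either branch need a phi at node 4
```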
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ◊W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ◊W. Thus, ◊W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
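The heart of Chord is its placement rule: a key is stored on the first node whose identifier follows the key on a circular identifier ring. The sketch below shows just that rule; real Chord also maintains finger tables to locate the successor in O(log N) hops across machines, which this single-process toy omits (the identifier size and node names are arbitrary):

```python
import hashlib
from bisect import bisect_left

M = 16                                          # identifier bits

def chord_id(name):
    """Hash names (nodes and keys alike) onto the 2**M-position ring."""
    return int(hashlib.sha1(name.encode()).hexdigest(), 16) % (2 ** M)

class ChordRing:
    def __init__(self, node_names):
        self.ids = sorted(chord_id(n) for n in node_names)

    def successor(self, key_id):
        """A key lives on the first node clockwise from it on the ring."""
        i = bisect_left(self.ids, key_id)
        return self.ids[i % len(self.ids)]      # wrap around past the top

ring = ChordRing([f"node{i}" for i in range(8)])
k = chord_id("my-data-item")
print(f"key {k} is stored on node {ring.successor(k)}")
```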
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area² product (EDA²P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA²P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)-size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε²}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D+√n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(log N), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator; while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via error amplifier) with a moderate-gain high-frequency path (via bandpass filter) at the inputs of PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate two zeros required in a Type III compensator. Constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process with constant Gm/C biasing technique being applied to one of the designs. Measurement result shows that converter output is settled within 7 μs for a load current step of 500 mA. Peak efficiency of 97% is obtained at 360 mW output power, and high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of proposed compensator is reduced by > 75 % in both designs, compared to an equivalent conventional Type III compensator.
Fundamental analysis of a car to car visible light communication system This paper presents a mathematical model for car-to-car (C2C) visible light communications (VLC) that aims to predict the system performance under different communication geometries. A market-weighted headlamp beam pattern model is employed. We consider both the line-of-sight (LOS) and non-line-of-sight (NLOS) links, and outline the relationship between the communication distance, the system bit error rate (BER) performance and the BER distribution on a vertical plane. Results show that by placing a photodetector (PD) at a height of 0.2-0.4 m above the road surface in the car, the communications coverage range can be extended up to 20 m at a data rate of 2 Mbps.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM), and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM), and 10.48× (CMOS-based PIM), on average.
score_0–score_13: 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Analysis of timing jitter in CMOS ring oscillators In this paper, the effects of thermal noise in transistors on timing jitter in CMOS ring oscillators composed of source-coupled differential resistively-loaded delay cells are investigated. The relationship between delay element design parameters and the inherent thermal noise-induced jitter of the generated waveform is analyzed. These results are compared with simulated results from a Monte-Carlo analysis with good agreement. The analysis shows that timing jitter is inversely proportional to the square root of the total capacitance at the output of each inverter, and inversely proportional to the gate-source bias voltage above threshold of the source-coupled devices in the balanced state. Furthermore, these dependencies imply an inverse relationship between jitter and power consumption for an oscillator with fixed output period. Phase noise and timing jitter performance are predicted to improve at a rate of 10 dB per decade increase in power consumption.
A low voltage auto-reconfigured power-on-reset/bandgap reference circuit In this paper, a low voltage auto-reconfigured power-on-reset/bandgap reference circuit is proposed. During initial power up, the circuit utilizes a transconductor and a bandgap core to detect the power-up supply voltage level (VTPU) precisely with minimum temperature and process dependence. No precise reference voltage is required. After a power-up signal is issued, the circuit is reconfigured into a bandgap circuit with a precise reference output voltage (Vref) for the use of other circuits. Based on a 0.18 μm CMOS process, a VTPU of 999.5 mV with variations within +0.18% and -0.15% for different process corners and temperatures (-20°C to +120°C) was obtained. After power up, a Vref of 599.4 mV with variations within +0.15% and -0.46% was achieved.
Short-Range Low-Data-Rate FM-UWB Transceivers: Overview, Analysis, and Design. This paper summarizes and compares various circuit configurations of sub-modules and RF front-ends for frequency modulated ultra-wideband (FM-UWB), and analyzes the transceiver design parameters and link margin. High-robust relaxation oscillator for subcarrier generation, low-power ring oscillators for RF FM, automatic frequency calibration (AFC) for system robustness, and preamplifier-latch based...
A High-Precision Resistor-Less CMOS Compensated Bandgap Reference Based on Successive Voltage-Step Compensation. A curvature-compensated resistor-less bandgap reference (BGR), which is fabricated in a 0.5-μm CMOS process, is proposed in this paper. The BGR utilizes successive voltage-step compensation to produce a temperature-insensitive voltage reference (VR), including one ΔVGS step for first-order compensation and another for higher-order curvature correction. Moreover, a supply noise bypassing technique...
A Fully Integrated Wideband FM Transceiver for Low Data Rate Autonomous Systems A frequency-agile FM-UWB transceiver (Tx/Rx) with full on-chip calibration aimed at low data rate autonomous wireless sensing applications is described. The subcarrier VCO, 3-phase CCO, and frequency-tripling PA in the transmit path produce a wideband, double-FM output at −10.1 dBm-pk (FCC compliant). A tunable LNA, envelope detector, limiter, and FSK demodulator comprise the receiver. Digitally programmable matching networks at the PA output and LNA input facilitate independent tuning of Tx and Rx across the 3–5 GHz band. An on-chip SAR-FLL controlling 5 DACs (3 I-DACs and 2 C-DACs) performs a full Tx/Rx calibration in less than 2 ms. Designed for continuous operation at 100 kb/s, measured Rx sensitivity is −80.5 dBm (10⁻³ BER), and average Tx/Rx energy efficiency is 6 nJ/bit. Total dissipation for the 0.9 mm² IC implemented in 90 nm RF-CMOS is 630 µW in Tx and 580 µW in Rx mode from a 1 V supply.
An All-Digital 12 pJ/Pulse IR-UWB Transmitter Synthesized From a Standard Cell Library. This paper presents an all-digital impulse radio ultra-wideband (IR-UWB) transmitter. All functional blocks in the transmitter are implemented with digital standard cells and automatically place-and-routed by design tools. The center frequency and the bandwidth of the UWB pulses are digitally tuned to compensate for variations, or target different applications. This paper also proposes a calibrati...
0.56 V, –20 dBm RF-Powered, Multi-Node Wireless Body Area Network System-on-a-Chip With Harvesting-Efficiency Tracking Loop A battery-less, multi-node wireless body area network (WBAN) system-on-a-chip (SoC) is demonstrated. An efficiency tracking loop is proposed that adjusts the rectifier's threshold voltage to maximize the wireless harvesting operation, resulting in a minimum RF sensitivity better than -20 dBm at 904.5 MHz. Each SoC node is injection-locked and time-synchronized with the broadcasted RF basestation power (up to a sensitivity of -33 dBm) using an injection-locked frequency divider (ILFD). Hence, every sensor node is phase-locked with the basestation and all nodes can wirelessly transmit TDMA sensor data concurrently. Designed in a 65 nm-CMOS process, the fabricated sensor SoC contains the energy harvesting rectifier and bandgap, duty-cycled ADC, digital logic, as well as the multi-node wireless clock synchronization and MICS-band transmitter. For a broadcasted basestation power of 20 dBm (30 dBm), experimental measurements verify correct powering, sensor reading, and wireless data transfer for a distance of 3 m (9 m). The entire biomedical system application is verified by reception of room and abdominal temperature monitoring.
Class-C Harmonic CMOS VCOs, With a General Result on Phase Noise A harmonic oscillator topology displaying an improved phase noise performance is introduced in this paper. Exploiting the advantages yielded by operating the core transistors in class-C, a theoretical 3.9 dB phase noise improvement compared to the standard differential-pair LC-tank oscillator is achieved for the same current consumption. Further benefits derive from the natural rejection of the tail bias current noise, and from the absence of parasitic nodes sensitive to stray capacitances. Closed-form phase-noise equations obtained from a rigorous time-variant circuit analysis are presented, as well as a time-variant study of the stability of the oscillation amplitude, resulting in simple guidelines for a reliable design. Furthermore, the analysis of phase noise is extended to encompass a general harmonic oscillator, showing that all phase noise relations previously obtained for specific LC oscillator topologies are special cases of a very general and remarkably simple result.
Timing Recovery in Digital Synchronous Data Receivers A new class of fast-converging timing recovery methods for synchronous digital data receivers is investigated. Starting with a worst-case timing offset, convergence with random binary data will typically occur within 10-20 symbols. The input signal is sampled at the baud rate; these samples are then processed to derive a suitable control signal to adjust the timing phase. A general method is outlined to obtain near-minimum-variance estimates of the timing offset with respect to a given steady-state sampling criterion. Although we make certain independence assumptions between successive samples and postulate ideal decisions to obtain convenient analytical results, our simulations with a decision-directed reference and baud-to-baud adjustments yield very similar results. Convergence is exponential, and for small loop gains the residual jitter is proportional and convergence time is inversely proportional to the loop gain. The proposed algorithms are simple and economic to implement. They apply to binary or multilevel PAM signals as well as to partial response signals.
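One member of this class is the decision-directed, baud-rate error detector e_k = â_{k-1}·x_k − â_k·x_{k-1} now associated with this paper. A toy NumPy simulation of the loop on raised-cosine-shaped binary PAM (the rolloff, loop gain, and 0.4 UI starting offset are illustrative choices):

```python
import numpy as np

def rc_pulse(t, beta=0.5):
    """Raised-cosine pulse with unit symbol period (t measured in symbols)."""
    t = np.asarray(t, dtype=float)
    den = 1.0 - (2.0 * beta * t) ** 2
    safe = np.where(np.abs(den) < 1e-8, 1.0, den)
    g = np.sinc(t) * np.cos(np.pi * beta * t) / safe
    return np.where(np.abs(den) < 1e-8, (np.pi / 4) * np.sinc(1 / (2 * beta)), g)

rng = np.random.default_rng(1)
a = rng.choice([-1.0, 1.0], size=400)          # random binary data

def rx_sample(k, tau):
    """Received waveform sum_i a_i g(t - i), sampled once per baud at t = k + tau."""
    i = np.arange(max(0, k - 8), min(len(a), k + 9))
    return float(np.sum(a[i] * rc_pulse(k + tau - i)))

tau, gain = 0.4, 0.08                          # near-worst-case initial offset
x_prev, a_prev = 0.0, 1.0
for k in range(1, 80):
    x = rx_sample(k, tau)
    a_hat = 1.0 if x >= 0 else -1.0            # decision-directed reference
    e = a_prev * x - a_hat * x_prev            # baud-rate timing error detector
    tau += gain * e                            # baud-to-baud phase adjustment
    x_prev, a_prev = x, a_hat
print(f"residual timing offset: {tau:+.3f} UI")  # settles near 0
```

As in the paper's analysis, the residual jitter shrinks and the convergence time grows as the loop gain is reduced.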
An intelligent power amplifier MMIC using a new adaptive bias control circuit for W-CDMA applications A high-linearity and high-efficiency MMIC power amplifier is proposed that adopts a new on-chip adaptive bias control circuit, which simultaneously improves efficiency at the low output power level and linearity at the high output power level. The adaptive bias control circuit detects the input power level and supplies a low quiescent current of 16 mA at the low output power level and an increased current of up to 90 mA adaptively, according to the increased power level. The intelligent W-CDMA power amplifier using the adaptive bias circuit improves average power usage efficiency by more than 1.93 times and the adjacent channel leakage ratio by 4 dB at an output power of 28.3 dBm. Index Terms—Adaptive bias control, heterojunction bipolar transistor (HBT), high efficiency, high linearity, MMIC, power amplifier, wide-band code-division multiple access (W-CDMA).
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
3-D Technology Assessment: Path-Finding the Technology/Design Sweet-Spot It is widely acknowledged that three-dimensional (3-D) technologies offer numerous opportunities for system design. In recent years, significant progress has been made on these 3-D technologies, and they have become probably the best hope for carrying the semiconductor industry beyond the path of Moore's law. However, a clear roadmap is missing to successfully introduce this 3-D technology onto th...
Nebula: Distributed Edge Cloud for Data Intensive Computing Centralized cloud infrastructures have become the de-facto platform for data-intensive computing today. However, they suffer from inefficient data mobility due to the centralization of cloud resources, and hence, are highly unsuited for dispersed-data-intensive applications, where the data may be spread at multiple geographical locations. In this paper, we present Nebula: a dispersed cloud infrastructure that uses voluntary edge resources for both computation and data storage. We describe the lightweight Nebula architecture that enables distributed data-intensive computing through a number of optimizations including location-aware data and computation placement, replication, and recovery. We evaluate Nebula's performance on an emulated volunteer platform that spans over 50 PlanetLab nodes distributed across Europe, and show how a common data-intensive computing framework, MapReduce, can be easily deployed and run on Nebula. We show Nebula MapReduce is robust to a wide array of failures and substantially outperforms other wide-area versions based on a BOINC like model.
A 32-Channel Time-Multiplexed Artifact-Aware Neural Recording System This paper presents a low-power, low-noise microsystem for the recording of neural local field potentials or intracranial electroencephalographic signals. It features 32 time-multiplexed channels at the electrode interface and offers the possibility to spatially delta encode data to take advantage of the large correlation of signals captured from nearby channels. The circuit also implements a mixed-signal voltage-triggered auto-ranging algorithm which attenuates large interferers in the digital domain while preserving neural information. This effectively increases the system dynamic range and avoids the onset of saturation. A prototype, fabricated in a standard 180 nm CMOS process, has been experimentally verified in vitro with cellular cultures of primary cortical neurons from mice. The system shows an integrated input-referred noise in the 0.5–200 Hz band of 1.4 μVrms for a spot noise of about 85 nV/√Hz. The system draws 1.5 μW per channel from a 1.2 V supply and obtains 71 dB + 26 dB dynamic range when the artifact-aware auto-ranging mechanism is enabled, without penalising other critical specifications such as crosstalk between channels or common-mode and power supply rejection ratios.
score_0–score_13: 1.032922, 0.025, 0.025, 0.025, 0.014996, 0.006371, 0.000185, 0.000091, 0.000028, 0.000001, 0, 0, 0, 0
Distributed Detection: Finite-time Analysis and Impact of Network Topology. This paper addresses the problem of distributed detection in multi-agent networks. Agents receive private signals about an unknown state of the world. The underlying state is globally identifiable, yet informative signals may be dispersed throughout the network. Using an optimization-based framework, we develop an iterative local strategy for updating individual beliefs. In contrast to the existing literature which focuses on asymptotic learning, we provide a finite-time analysis. Furthermore, we introduce a Kullback-Leibler cost to compare the efficiency of the algorithm to its centralized counterpart. Our bounds on the cost are expressed in terms of network size, spectral gap, centrality of each agent and relative entropy of agents' signal structures. A key observation is that distributing more informative signals to central agents results in a faster learning rate. Furthermore, optimizing the weights, we can speed up learning by improving the spectral gap. We also quantify the effect of link failures on learning speed in symmetric networks. We finally provide numerical simulations for our method which verify our theoretical results.
Distributed estimation and quantization An algorithm is developed for the design of a nonlinear, n-sensor, distributed estimation system subject to communication and computation constraints. The algorithm uses only bivariate probability distributions and yields locally optimal estimators that satisfy the required system constraints. It is shown that the algorithm is a generalization of the classical Lloyd-Max results
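For reference, the classical Lloyd-Max design that this algorithm generalizes alternates its two optimality conditions on a scalar source. A NumPy sketch on empirical samples (the level count and data are illustrative):

```python
import numpy as np

def lloyd_max(samples, levels=4, iters=50):
    """Classical Lloyd-Max scalar quantizer design on empirical data.

    Alternates the two optimality conditions: decision boundaries are
    midpoints between reconstruction levels; each level is the mean
    (centroid) of the samples falling in its cell.
    """
    x = np.sort(samples)
    # Initialize levels at evenly spaced quantiles.
    q = np.quantile(x, (np.arange(levels) + 0.5) / levels)
    for _ in range(iters):
        bounds = (q[:-1] + q[1:]) / 2          # nearest-neighbor condition
        cells = np.searchsorted(bounds, x)     # assign samples to cells
        q = np.array([x[cells == j].mean() if np.any(cells == j) else q[j]
                      for j in range(levels)]) # centroid condition
    return q, bounds

rng = np.random.default_rng(0)
q, b = lloyd_max(rng.normal(size=10000), levels=4)
print("levels:", np.round(q, 3))   # ~ +/-0.45 and +/-1.51 for a standard normal
```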
Multi-sensor optimal information fusion Kalman filter This paper presents a new multi-sensor optimal information fusion criterion weighted by matrices in the linear minimum variance sense; it is equivalent to the maximum likelihood fusion criterion under the assumption of normal distribution. Based on this optimal fusion criterion, a general multi-sensor optimal information fusion decentralized Kalman filter with a two-layer fusion structure is given for discrete time linear stochastic control systems with multiple sensors and correlated noises. The first fusion layer has a netted parallel structure to determine the cross covariance between every pair of faultless sensors at each time step. The second fusion layer is the fusion center that determines the optimal fusion matrix weights and obtains the optimal fusion filter. Comparing it with the centralized filter, the result shows that the computational burden is reduced, and the precision of the fusion filter is lower than that of the centralized filter when all sensors are faultless, but the fusion filter has fault tolerance and robustness properties when some sensors are faulty. Further, the precision of the fusion filter is higher than that of each local filter. Applying it to a radar tracking system with three sensors demonstrates its effectiveness.
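The special case with uncorrelated estimation errors has a compact closed form (information-weighted averaging); the paper's criterion additionally tracks cross-covariances between sensors, which this NumPy sketch omits:

```python
import numpy as np

def fuse(estimates, covariances):
    """Minimum-variance matrix-weighted fusion of unbiased local estimates.

    Sketch of the uncorrelated-errors special case: weights are the
    inverse covariances (information matrices) of the local estimates.
    """
    infos = [np.linalg.inv(P) for P in covariances]
    P_fused = np.linalg.inv(sum(infos))                   # fused covariance
    x_fused = P_fused @ sum(I @ x for I, x in zip(infos, estimates))
    return x_fused, P_fused

x1, P1 = np.array([1.0, 2.0]), np.diag([0.5, 2.0])
x2, P2 = np.array([1.2, 1.6]), np.diag([2.0, 0.5])
x, P = fuse([x1, x2], [P1, P2])
print(x)           # weighted toward the more confident sensor per component
print(np.diag(P))  # fused variances are smaller than each local one
```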
Optimal dimensionality reduction of sensor data in multisensor estimation fusion When there exists a limitation of communication bandwidth between sensors and a fusion center, one needs to optimally precompress sensor outputs (sensor observations or estimates) before the sensors' transmission in order to obtain a constrained optimal estimation at the fusion center in terms of the linear minimum error variance criterion; or, when an allowed performance loss constraint exists, one needs to design the minimum dimension of sensor data. This paper answers the above questions by using matrix decomposition, pseudo-inverse, and eigenvalue techniques.
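A toy instance of the precompression idea: project sensor outputs onto their leading principal directions and transmit the smallest number of coordinates that keeps the reconstruction loss within an allowed budget. The synthetic data, the 1% loss budget, and the SVD-based rank rule here are illustrative, not the paper's exact criterion:

```python
import numpy as np

rng = np.random.default_rng(0)
# 200 observations of an 8-dimensional sensor output that really lives
# near a 3-dimensional subspace (illustrative data).
latent = rng.standard_normal((200, 3))
mix = rng.standard_normal((3, 8))
Y = latent @ mix + 0.05 * rng.standard_normal((200, 8))

U, s, Vt = np.linalg.svd(Y - Y.mean(0), full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
k = int(np.searchsorted(energy, 0.99) + 1)   # smallest dim within 1% loss
print("transmit dimension:", k)              # 3 for this synthetic data

compress = Vt[:k]                # sensor side: y_c = compress @ y
restore = Vt[:k].T               # fusion center: y_hat = restore @ y_c
y = Y[0] - Y.mean(0)
err = np.linalg.norm(y - restore @ (compress @ y)) / np.linalg.norm(y)
print(f"relative reconstruction error: {err:.3f}")
```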
On the cover time and mixing time of random geometric graphs The cover time and mixing time of graphs has much relevance to algorithmic applications and has been extensively investigated. Recently, with the advent of ad hoc and sensor networks, an interesting class of random graphs, namely random geometric graphs, has gained new relevance and its properties have been the subject of much study. A random geometric graph G(n,r) is obtained by placing n points uniformly at random on the unit square and connecting two points iff their Euclidean distance is at most r. The phase transition behavior with respect to the radius r of such graphs has been of special interest. We show that there exists a critical radius r_opt such that for any r ≥ r_opt, G(n,r) has optimal cover time of Θ(n log n) with high probability, and, importantly, r_opt = Θ(r_con), where r_con denotes the critical radius guaranteeing asymptotic connectivity. Moreover, since a disconnected graph has infinite cover time, there is a phase transition and the corresponding threshold width is O(r_con). On the other hand, the radius required for rapid mixing satisfies r_rapid = ω(r_con) and, in particular, r_rapid = Θ(1/poly(log n)). We are able to draw our results by giving a tight bound on the electrical resistance and conductance of G(n,r) via certain constructed flows.
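The connectivity phase transition is easy to observe numerically. A small Monte-Carlo sketch around the standard connectivity scale r_con = sqrt(log n / (π n)), assumed here as the reference radius (n, trial counts, and the multipliers are illustrative):

```python
import numpy as np
from collections import deque

def rgg_connected(n, r, rng):
    """Sample G(n, r) on the unit square and test connectivity by BFS."""
    pts = rng.random((n, 2))
    d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
    adj = d2 <= r * r                      # symmetric boolean adjacency
    seen, queue = {0}, deque([0])
    while queue:
        u = queue.popleft()
        for v in np.flatnonzero(adj[u]):
            if v not in seen:
                seen.add(int(v)); queue.append(int(v))
    return len(seen) == n

rng = np.random.default_rng(0)
n = 400
r_con = np.sqrt(np.log(n) / (np.pi * n))   # connectivity threshold scale
for c in (0.5, 1.0, 2.0):
    hits = sum(rgg_connected(n, c * r_con, rng) for _ in range(20))
    print(f"r = {c:.1f} * r_con: connected in {hits}/20 trials")
```

Well below the threshold isolated vertices make the graph disconnected (and the cover time infinite); well above it, connectivity holds in essentially every trial.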
Mirror descent and nonlinear projected subgradient methods for convex optimization The mirror descent algorithm (MDA) was introduced by Nemirovsky and Yudin for solving convex optimization problems. This method exhibits an efficiency estimate that is mildly dependent on the decision variables dimension, and is thus suitable for solving very large scale optimization problems. We present a new derivation and analysis of this algorithm. We show that the MDA can be viewed as a nonlinear projected-subgradient type method, derived from using a general distance-like function instead of the usual Euclidean squared distance. Within this interpretation, we derive in a simple way convergence and efficiency estimates. We then propose an Entropic mirror descent algorithm for convex minimization over the unit simplex, with a global efficiency estimate proven to be mildly dependent on the dimension of the problem.
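With the negative-entropy distance-generating function, the MDA step over the simplex becomes a multiplicative update followed by renormalization. A NumPy sketch on a toy simplex-constrained least-squares problem (the step-size rule follows the standard sqrt(2 ln n)/(L sqrt(k)) form; problem sizes are illustrative):

```python
import numpy as np

def entropic_mirror_descent(grad, n, iters=500):
    """Entropic MD over the unit simplex: multiplicative-weights update.

    The nonlinear projected-subgradient step with the negative-entropy
    Bregman distance reduces to x <- x * exp(-t g), renormalized.
    """
    x = np.full(n, 1.0 / n)
    for k in range(1, iters + 1):
        g = grad(x)
        t = np.sqrt(2 * np.log(n)) / (np.sqrt(k) * (np.linalg.norm(g, np.inf) + 1e-12))
        x = x * np.exp(-t * g)
        x /= x.sum()                      # stay on the simplex
    return x

# Example: minimize f(x) = ||A x - b||^2 over the unit simplex.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
b = A @ rng.dirichlet(np.ones(10))        # optimum value is 0 by construction
x = entropic_mirror_descent(lambda x: 2 * A.T @ (A @ x - b), 10)
print(round(float(np.sum((A @ x - b) ** 2)), 4))  # near 0
```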
Max-Min D-Cluster Formation in Wireless Ad Hoc Networks An ad hoc network may be logically represented as a set of clusters. The clusterheads form a d-hop dominating set: each node is at most d hops from a clusterhead. Clusterheads form a virtual backbone and may be used to route packets for nodes in their cluster. Previous heuristics restricted themselves to 1-hop clusters. We show that the minimum d-hop dominating set problem is NP-complete. Then we present a heuristic to form d-clusters in a wireless ad hoc network. Nodes are assumed to have a non-deterministic mobility pattern. Clusters are formed by diffusing node identities along the wireless links. When the heuristic terminates, a node either becomes a clusterhead, or is at most d wireless hops away from its clusterhead. The value of d is a parameter of the heuristic. The heuristic can be run either at regular intervals, or whenever the network configuration changes. One of the features of the heuristic is that it tends to re-elect existing clusterheads even when the network configuration changes. This helps to reduce the communication overheads during transition from old clusterheads to new clusterheads. Also, there is a tendency to evenly distribute the mobile nodes among the clusterheads, and evenly distribute the responsibility of acting as clusterheads among all nodes. Thus, the heuristic is fair and stable. Simulation experiments demonstrate that the proposed heuristic is better than two earlier heuristics, namely the LCA and Degree-based solutions.
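A compact sketch of the heuristic's structure: d synchronous rounds of floodmax, d rounds of floodmin, then each node applies the three selection rules. The graph is a hypothetical path, and the tie-break details that favor re-electing old clusterheads are omitted:

```python
def max_min_d_cluster(adj, d):
    """Max-Min d-cluster sketch: flooding phases plus selection rules.

    After d rounds of propagating the max id and d rounds of propagating
    the min, each node picks its head by: (1) own id seen in the floodmin
    log; (2) smallest id seen in both logs ('node pair'); (3) floodmax winner.
    """
    val = {v: v for v in adj}
    logs = {v: [] for v in adj}
    for rnd in range(2 * d):
        pick = max if rnd < d else min
        val = {v: pick([val[v]] + [val[u] for u in adj[v]]) for v in adj}
        for v in adj:
            logs[v].append(val[v])
    head = {}
    for v in adj:
        fmax, fmin = set(logs[v][:d]), set(logs[v][d:])
        if v in fmin:                      # rule 1: elect self
            head[v] = v
        elif fmax & fmin:                  # rule 2: smallest node pair
            head[v] = min(fmax & fmin)
        else:                              # rule 3: floodmax winner
            head[v] = max(fmax)
    return head

# Path 0-1-2-...-9 with d = 2: every node ends at most 2 hops from its head.
adj = {i: [j for j in (i - 1, i + 1) if 0 <= j <= 9] for i in range(10)}
print(max_min_d_cluster(adj, d=2))
```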
A Low-Power Fast-Transient 90-nm Low-Dropout Regulator With Multiple Small-Gain Stages A power-efficient 90-nm low-dropout regulator (LDO) with multiple small-gain stages is proposed in this paper. The proposed channel-resistance-insensitive small-gain stages provide loop gain enhancements without introducing low-frequency poles before the unity-gain frequency (UGF). As a result, both the loop gain and bandwidth of the LDO are improved, so that the accuracy and response speed of voltage regulation are significantly enhanced. As no on-chip compensation capacitor is required, the active chip area of the LDO is only 72.5 μm × 37.8 μm. Experimental results show that the LDO is capable of providing an output of 0.9 V with maximum output current of 50 mA from a 1-V supply. The LDO has a quiescent current of 9.3 μA, and shows significant improvement in line and load transient responses as well as in power-supply rejection ratio (PSRR).
The PARSEC benchmark suite: characterization and architectural implications This paper presents and characterizes the Princeton Application Repository for Shared-Memory Computers (PARSEC), a benchmark suite for studies of Chip-Multiprocessors (CMPs). Previous available benchmarks for multiprocessors have focused on high-performance computing applications and used a limited number of synchronization methods. PARSEC includes emerging applications in recognition, mining and synthesis (RMS) as well as systems applications which mimic large-scale multithreaded commercial programs. Our characterization shows that the benchmark suite covers a wide spectrum of working sets, locality, data sharing, synchronization and off-chip traffic. The benchmark suite has been made available to the public.
Gossip-based aggregation in large dynamic networks As computer networks increase in size, become more heterogeneous and span greater geographic distances, applications must be designed to cope with the very large scale, poor reliability, and often, with the extreme dynamism of the underlying network. Aggregation is a key functional building block for such applications: it refers to a set of functions that provide components of a distributed system access to global information including network size, average load, average uptime, location and description of hotspots, and so on. Local access to global information is often very useful, if not indispensable for building applications that are robust and adaptive. For example, in an industrial control application, some aggregate value reaching a threshold may trigger the execution of certain actions; a distributed storage system will want to know the total available free space; load-balancing protocols may benefit from knowing the target average load so as to minimize the load they transfer. We propose a gossip-based protocol for computing aggregate values over network components in a fully decentralized fashion. The class of aggregate functions we can compute is very broad and includes many useful special cases such as counting, averages, sums, products, and extremal values. The protocol is suitable for extremely large and highly dynamic systems due to its proactive structure---all nodes receive the aggregate value continuously, thus being able to track any changes in the system. The protocol is also extremely lightweight, making it suitable for many distributed applications including peer-to-peer and grid computing systems. We demonstrate the efficiency and robustness of our gossip-based protocol both theoretically and experimentally under a variety of scenarios including node and communication failures.
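The protocol's core for averages is push-pull pairwise averaging: each cycle a node contacts a random peer and both adopt the mean of their two values, which conserves the sum and drives every node's local value to the global average. A sketch on a fully connected overlay (cycle count and sizes are illustrative; counting network size follows by initializing one node to 1 and the rest to 0):

```python
import random

def gossip_average(values, cycles=30, seed=0):
    """Push-pull averaging gossip: all nodes converge to the global mean."""
    rng = random.Random(seed)
    v = list(values)
    n = len(v)
    for _ in range(cycles):
        for i in range(n):                 # each node gossips once per cycle
            j = rng.randrange(n)
            if j != i:
                v[i] = v[j] = (v[i] + v[j]) / 2.0   # pairwise state exchange
    return v

rng = random.Random(1)
loads = [rng.uniform(0, 100) for _ in range(50)]
est = gossip_average(loads)
true_mean = sum(loads) / len(loads)
print(max(abs(x - true_mean) for x in est))   # tiny after 30 cycles
```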
Fuzzy regulators and fuzzy observers: relaxed stability conditions and LMI-based designs This paper presents new relaxed stability conditions and LMI- (linear matrix inequality) based designs for both continuous and discrete fuzzy control systems. They are applied to design problems of fuzzy regulators and fuzzy observers. First, Takagi and Sugeno's fuzzy models and some stability results are recalled. To design fuzzy regulators and fuzzy observers, nonlinear systems are represented by Takagi-Sugeno (TS) fuzzy models. The concept of parallel distributed compensation is employed to design fuzzy regulators and fuzzy observers from the TS fuzzy models. New stability conditions are obtained by relaxing the stability conditions derived in previous papers; LMI-based design procedures for fuzzy regulators and fuzzy observers are constructed using the parallel distributed compensation and the relaxed stability conditions. Other LMIs with respect to decay rate and constraints on control input and output are also derived and utilized in the design procedures. Design examples for nonlinear systems demonstrate the utility of the relaxed stability conditions and the LMI-based design procedures.
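The quadratic-stability conditions behind such designs are LMIs: find P ≻ 0 with AᵢᵀP + PAᵢ ≺ 0 for every local model Aᵢ of the TS system. A minimal cvxpy feasibility check on two hypothetical local models (the paper's relaxed conditions add structure beyond this basic common-Lyapunov test; the matrices and margin eps are illustrative):

```python
import cvxpy as cp
import numpy as np

# Two stable subsystem matrices standing in for TS fuzzy local models
# (illustrative values, not from the paper).
A1 = np.array([[-1.0, 0.5], [0.0, -0.8]])
A2 = np.array([[-0.9, 0.0], [0.4, -1.2]])

P = cp.Variable((2, 2), symmetric=True)
eps = 1e-3
constraints = [P >> eps * np.eye(2)]                       # P positive definite
constraints += [Ai.T @ P + P @ Ai << -eps * np.eye(2)      # Lyapunov LMIs
                for Ai in (A1, A2)]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()
print(prob.status)        # 'optimal' => common quadratic Lyapunov function found
print(np.round(P.value, 3))
```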
Digital signal processors in cellular radio communications Contemporary wireless communications are based on digital communications technologies. The recent commercial success of mobile cellular communications has been enabled in part by successful designs of digital signal processors with appropriate on-chip memories and specialized accelerators for digital transceiver operations. This article provides an overview of fixed point digital signal processors and ways in which they are used in cellular communications. Directions for future wireless-focused DSP technology developments are discussed
Kinesis: a security incident response and prevention system for wireless sensor networks This paper presents Kinesis, a security incident response and prevention system for wireless sensor networks, designed to keep the network functional despite anomalies or attacks and to recover from attacks without significant interruption. Due to the deployment of sensor networks in various critical infrastructures, the applications often impose stringent requirements on data reliability and service availability. Given the failure- and attack-prone nature of sensor networks, it is a pressing concern to enable the sensor networks provide continuous and unobtrusive services. Kinesis is quick and effective in response to incidents, distributed in nature, and dynamic in selecting response actions based on the context. It is lightweight in terms of response policy specification, and communication and energy overhead. A per-node single timer based distributed strategy to select the most effective response executor in a neighborhood makes the system simple and scalable, while achieving proper load distribution and redundant action optimization. We implement Kinesis in TinyOS and measure its performance for various application and network layer incidents. Extensive TOSSIM simulations and testbed experiments show that Kinesis successfully counteracts anomalies/attacks and behaves consistently under various attack scenarios and rates.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM), and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM), and 10.48× (CMOS-based PIM), on average.
score_0–score_13: 1.1, 0.1, 0.1, 0.1, 0.1, 0.02, 0, 0, 0, 0, 0, 0, 0, 0
A GaN-Based Wireless Monitoring System for High-Temperature Applications. A fully-integrated data transmission system based on gallium nitride (GaN) high-electron-mobility transistor (HEMT) devices is proposed. This system targets high-temperature (HT) applications, especially those involving pressure and temperature sensors for aerospace in which the environmental temperature exceeds 350 °C. The presented system includes a front-end amplifying the sensed signal (gain of 50 V/V), followed by a novel analog-to-digital converter driving a modulator exploiting the load-shift keying technique. An oscillation frequency of 1.5 MHz is used to ensure a robust wireless transmission through metallic-based barriers. To retrieve the data, a new demodulator architecture based on digital circuits is proposed. A 1 V amplitude difference can be detected between a high-amplitude (data-on) and a low-amplitude (data-off) state of the received modulated signal. Two high-voltage supply levels (+14 V and -14 V) are required to operate the circuits. The layout of the proposed system was completed in a chip occupying 10.8 mm². The HT characterization and modeling of integrated GaN devices and passive components are performed to ensure the reliability of simulation results. The performance of the various proposed building blocks, as well as the whole system, has been validated by simulation over the projected wide operating temperature range (25-350 °C).
System-on-Chip: Reuse and Integration Over the past ten years, as integrated circuits became increasingly more complex and expensive, the industry began to embrace new design and reuse methodologies that are collectively referred to as system-on-chip (SoC) design. In this paper, we focus on the reuse and integration issues encountered in this paradigm shift. The reusable components, called intellectual property (IP) blocks or cores, a...
A 300mA 14mV-ripple digitally controlled buck converter using frequency domain ΔΣ ADC and hybrid PWM generator A 0.18 μm CMOS digitally controlled DC-DC buck converter is presented. An all-digital 8 b frequency-domain ΔΣ ADC is used for the feedback path, and a 9 b segmented digital PWM is used for power stage control. A regulated output voltage accuracy of 1% and maximum efficiency of 94% is achieved with less than 14 mVpp ripple and a settling time of 100 μs for a 300 mA load transient.
Efficient Bi-Directional Digital Communication Scheme for Isolated Switch Mode Power Converters. An efficient high-speed bi-directional data transmission scheme for isolated AC-DC and DC-DC switched mode power converters is presented. The bi-directional scheme supports fast, efficient and reliable transmission of digitally encoded data across the isolation barrier and enables primary side control, allowing effective start-up and a simple interface to system controllers. Another key feature is...
A 4-Phase 30–70 MHz Switching Frequency Buck Converter Using a Time-Based Compensator A high switching frequency multi-phase buck converter architecture using a time-based compensator is presented. Efficiency degradation due to mismatch between the phases is mitigated by generating precisely matched duty-cycles by combining a time-based multi-phase generator (MPG) with a time-based PID compensator (T-PID). The proposed approach obviates the need for a complex current sensing and ca...
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use dominance frontiers, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
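Dominance frontiers are where φ-functions are placed during SSA construction. A short sketch of computing them from precomputed immediate dominators, using the later two-finger formulation popularized by Cooper, Harvey, and Kennedy rather than the paper's original bottom-up algorithm (the diamond CFG is a hypothetical example):

```python
def dominance_frontiers(preds, idom):
    """Compute dominance frontiers from immediate dominators.

    For each join point b, walk each predecessor's dominator chain up to
    idom[b], adding b to the frontier of every node passed on the way.
    """
    df = {n: set() for n in preds}
    for b, ps in preds.items():
        if len(ps) >= 2:                   # only join points contribute
            for p in ps:
                runner = p
                while runner != idom[b]:
                    df[runner].add(b)
                    runner = idom[runner]
    return df

# Diamond CFG: entry -> a, b; a, b -> merge.
preds = {'entry': [], 'a': ['entry'], 'b': ['entry'], 'merge': ['a', 'b']}
idom = {'a': 'entry', 'b': 'entry', 'merge': 'entry'}
print(dominance_frontiers(preds, idom))
# {'entry': set(), 'a': {'merge'}, 'b': {'merge'}, 'merge': set()}
```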
Randomized algorithms This text by two well-known experts in the field presents the basic concepts in the design and analysis of randomized algorithms at a level accessible to beginning graduate students, professionals and researchers.
A Formal Basis for the Heuristic Determination of Minimum Cost Paths Although the problem of determining the minimum cost path through a graph arises naturally in a number of interesting applications, there has been no underlying theory to guide the development of efficient search procedures. Moreover, there is no adequate conceptual framework within which the various ad hoc search strategies proposed to date can be compared. This paper describes how heuristic information from the problem domain can be incorporated into a formal mathematical theory of graph searching and demonstrates an optimality property of a class of search strategies.
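The theory's best-known instance is A*: expand nodes in order of f = g + h, and an admissible heuristic preserves optimality. A self-contained sketch on a hypothetical 5x5 grid with the Manhattan-distance heuristic:

```python
import heapq

def a_star(start, goal, neighbors, h):
    """A* search: expand by f = g + h; admissible h gives optimal paths."""
    open_heap = [(h(start), 0, start, None)]
    parents, g_best = {}, {start: 0}
    while open_heap:
        f, g, node, parent = heapq.heappop(open_heap)
        if node in parents:
            continue                      # already expanded via a cheaper path
        parents[node] = parent
        if node == goal:                  # reconstruct path from parent links
            path = [node]
            while parents[path[-1]] is not None:
                path.append(parents[path[-1]])
            return path[::-1], g
        for nxt, w in neighbors(node):
            ng = g + w
            if ng < g_best.get(nxt, float('inf')):
                g_best[nxt] = ng
                heapq.heappush(open_heap, (ng + h(nxt), ng, nxt, node))
    return None, float('inf')

# 4-connected grid with Manhattan-distance heuristic (admissible here).
def nbrs(p):
    x, y = p
    return [((x + dx, y + dy), 1) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if 0 <= x + dx < 5 and 0 <= y + dy < 5]

path, cost = a_star((0, 0), (4, 4), nbrs, lambda p: abs(4 - p[0]) + abs(4 - p[1]))
print(cost, path)   # cost 8 on the empty 5x5 grid
```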
Consensus problems in networks of agents with switching topology and time-delays. In this paper, we discuss consensus problems for a network of dynamic agents with fixed and switching topologies. We analyze three cases: i) networks with switching topology and no time-delays, ii) networks with fixed topology and communication time-delays, and iii) max-consensus problems (or leader determination) for groups of discrete-time agents. In each case, we introduce a linear/nonlinear consensus protocol and provide convergence analysis for the proposed distributed algorithm. Moreover, we establish a connection between the Fiedler eigenvalue of the information flow in a network (i.e. algebraic connectivity of the network) and the negotiation speed (or performance) of the corresponding agreement protocol. It turns out that balanced digraphs play an important role in addressing average-consensus problems. We introduce disagreement functions that play the role of Lyapunov functions in convergence analysis of consensus protocols. A distinctive feature of this work is to address consensus problems for networks with directed information flow. We provide analytical tools that rely on algebraic graph theory, matrix theory, and control theory. Simulations are provided that demonstrate the effectiveness of our theoretical results.
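Case (i) with a fixed topology reduces to the linear iteration x ← x − εLx, where L is the graph Laplacian; on a balanced digraph the states converge to the initial average, and the spectral gap (Fiedler eigenvalue) sets the speed. A NumPy sketch on a hypothetical directed ring (ε, sizes, and initial values are illustrative):

```python
import numpy as np

# Directed ring on 5 agents (a balanced digraph): consensus protocol
# x_i <- x_i + eps * sum_j a_ij (x_j - x_i).
n, eps = 5, 0.4
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = 1.0          # each agent listens to its successor

L = np.diag(A.sum(axis=1)) - A       # graph Laplacian
x = np.array([3.0, -1.0, 4.0, 0.0, 9.0])
avg = x.mean()
for _ in range(200):
    x = x - eps * (L @ x)            # discrete-time consensus update
print(np.round(x, 4), "average:", avg)  # balanced digraph -> average consensus
```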
TaintDroid: An Information-Flow Tracking System for Realtime Privacy Monitoring on Smartphones Today's smartphone operating systems frequently fail to provide users with visibility into how third-party applications collect and share their private data. We address these shortcomings with TaintDroid, an efficient, system-wide dynamic taint tracking and analysis system capable of simultaneously tracking multiple sources of sensitive data. TaintDroid enables realtime analysis by leveraging Android's virtualized execution environment. TaintDroid incurs only 32% performance overhead on a CPU-bound microbenchmark and imposes negligible overhead on interactive third-party applications. Using TaintDroid to monitor the behavior of 30 popular third-party Android applications, in our 2010 study we found 20 applications potentially misused users' private information; so did a similar fraction of the tested applications in our 2012 study. Monitoring the flow of privacy-sensitive data with TaintDroid provides valuable input for smartphone users and security service firms seeking to identify misbehaving applications.
Fuzzy regulators and fuzzy observers: relaxed stability conditions and LMI-based designs This paper presents new relaxed stability conditions and LMI- (linear matrix inequality) based designs for both continuous and discrete fuzzy control systems. They are applied to design problems of fuzzy regulators and fuzzy observers. First, Takagi and Sugeno's fuzzy models and some stability results are recalled. To design fuzzy regulators and fuzzy observers, nonlinear systems are represented by Takagi-Sugeno (TS) fuzzy models. The concept of parallel distributed compensation is employed to design fuzzy regulators and fuzzy observers from the TS fuzzy models. New stability conditions are obtained by relaxing the stability conditions derived in previous papers; LMI-based design procedures for fuzzy regulators and fuzzy observers are constructed using the parallel distributed compensation and the relaxed stability conditions. Other LMIs with respect to decay rate and constraints on control input and output are also derived and utilized in the design procedures. Design examples for nonlinear systems demonstrate the utility of the relaxed stability conditions and the LMI-based design procedures.
Highly sensitive Hall magnetic sensor microsystem in CMOS technology A highly sensitive magnetic sensor microsystem based on a Hall device is presented. This microsystem consists of a Hall device improved by an integrated magnetic concentrator and new circuit architecture for the signal processing. It provides an amplification of the sensor signal with a resolution better than 30 μV and a periodic offset cancellation while the output of the microsystem is av...
A Highly Adaptive Leader Election Algorithm for Mobile Ad Hoc Networks.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM), and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM), and 10.48× (CMOS-based PIM), on average.
score_0–score_13: 1.24, 0.24, 0.24, 0.24, 0.12, 0, 0, 0, 0, 0, 0, 0, 0, 0
20.3 A feedforward controlled on-chip switched-capacitor voltage regulator delivering 10W in 32nm SOI CMOS On-chip (or fully integrated) switched-capacitor (SC) voltage regulators (SCVR) have recently received a lot of attention due to their ease of monolithic integration. The use of deep trench capacitors can lead to SCVR implementations that simultaneously achieve high efficiency, high power density, and fast response time. For the application of granular power distribution of many-core microprocessor systems, the on-chip SCVR must maintain an output voltage above a certain minimum level Vout,min in order for the microprocessor core to meet setup time requirements. Following a transient load change, the output voltage typically exhibits a droop due to parasitic inductances and resistances in the power distribution network. Therefore, the steady-state output voltage is kept high enough to ensure VOUT > Vout,min at all times, thereby introducing an output voltage overhead that leads to increased system power consumption. The output voltage droop can be reduced by implementing fast regulation and a sufficient amount of on-chip decoupling capacitance. However, a large amount of on-chip decoupling capacitance is needed to significantly reduce the droop, and it becomes impractical to implement owing to the large chip area overhead required. This paper presents a feedforward control scheme that significantly reduces the output voltage droop in the presence of a large input voltage droop following a transient event. This in turn reduces the required output voltage overhead and may lead to significant overall system power savings.
Input-adaptive dual-output power management unit for energy harvesting devices An input-adaptive dual-output charge pump (DOCP) with variable fractional conversion ratio and low dropout regulators (LDRs) in cascade is implemented for the power management unit (PMU) of implantable energy harvesting devices. The charge pump has one step-down and one step-up output adaptively converted from a 1.8 to 4.0 V harvested energy source, and the outputs of the LDRs are 1 V and 3 V respectively. To improve the overall efficiency, conversion ratios of k/6 (k=2,..., 12) are realized by 1/2- and 1/3-capacitors using an interleaving scheme. The PMU is designed using a 0.13 μm 3.3 V CMOS process, and attains a peak efficiency of 81.3% and efficiency better than 55% for a wide input range.
Analysis and Design Strategy of On-Chip Charge Pumps for Micro-power Energy Harvesting Applications.
20.4 A 123-phase DC-DC converter-ring with fast-DVS for microprocessors Inspired by The Square of Vatican City, a fully integrated step-down switched-capacitor DC-DC converter ring with 100+ phases is designed with a fast dynamic voltage scaling (DVS) feature for the microprocessor in portable or wearable devices. As shown in Fig. 20.4.1, this symmetrical ring-shaped converter surrounds its load in the square and supplies the on-chip power grid, such that a good quality power supply can be easily accessed at any point of the chip edges. There are 30 phases on the top edge and 31 phases on each of the other 3 edges, making 123 phases in total. The phase number and unit cell dimensions of this architecture can easily be adjusted to fit the floor plan of the load. The pads of the converter-ring are placed at the corners, and will not affect the pads of the load. Moreover, by using the proposed VDD-controlled oscillator (VDDCO), the frequency of which is controlled by varying its supply voltage, a hitherto unexplored feature of the multiphase DC-DC architecture is exposed: the control-loop unity gain frequency (UGF) could be designed to be higher than the switching frequency.
A Low-Power Fast-Transient 90-nm Low-Dropout Regulator With Multiple Small-Gain Stages A power-efficient 90-nm low-dropout regulator (LDO) with multiple small-gain stages is proposed in this paper. The proposed channel-resistance-insensitive small-gain stages provide loop gain enhancements without introducing low-frequency poles before the unity-gain frequency (UGF). As a result, both the loop gain and bandwidth of the LDO are improved, so that the accuracy and response speed of voltage regulation are significantly enhanced. As no on-chip compensation capacitor is required, the active chip area of the LDO is only 72.5 μm × 37.8 μm. Experimental results show that the LDO is capable of providing an output of 0.9 V with maximum output current of 50 mA from a 1-V supply. The LDO has a quiescent current of 9.3 μA, and shows significant improvement in line and load transient responses as well as in power-supply rejection ratio (PSRR).
Fully Integrated Capacitive DC–DC Converter With All-Digital Ripple Mitigation Technique This paper presents an adaptive all-digital ripple mitigation technique for fully integrated capacitive dc-dc converters. Ripple control is achieved using a two-pronged approach where coarse ripple control is achieved by varying the size of the bucket capacitance, and fine control is achieved by charge/discharge time modulation of the bucket capacitors used to transfer the charge between the input and output, both of which are completely digital techniques. A dual-loop control was used to achieve regulation and ripple control. The primary single-bound hysteretic control loop achieves voltage regulation and the secondary loop is responsible for ripple control. The dual-loop control modulates the charge/discharge pulse width in a hysteretic variable-frequency environment using a simple digital pulse width modulator. The fully integrated converter was implemented in IBM's 130-nm CMOS process. Ripple reduces from 98 to 30 mV, when ripple control secondary loop is enabled for a load of 0.3 V and 4 mA without significantly impacting the converter's core efficiency. Measurement results show constant ripple, independent of output voltage. The converter achieves a maximum efficiency of 70% for Vin= 1.3 V and Vout= 0.5 V and a maximum power density of 24.5 mW/mm2, including the areas for the decoupling capacitor. The maximum power density increases to 68 mW/mm2 if the decoupling capacitor is assumed to be already present as part of the digital design.
A Sizing Methodology for On-Chip Switched-Capacitor DC/DC Converters This paper proposes a systematic sizing methodology for switched-capacitor DC/DC converters aimed at maximizing the converter efficiency under the die area constraint. To do so, we propose first an analytical solution of the optimum switching frequency to maximize the converter efficiency. When the parasitic capacitances are low, this solution leads to an identical contribution of the switches and transfer capacitors to the converter output impedance. As the parasitic capacitances increase, the optimum switching frequency decreases. Secondly, optimum capacitor and switch sizes for maximum efficiency are provided. We show that the overdrive voltage strongly impacts the optimum switch width through the modification of their conductance. To support the sizing methodology, a model of the efficiency of switched-capacitor DC/DC converters is developed. It is validated against simulation and measurement results in 65 nm and 0.13 μm CMOS, respectively. The proposed sizing methodology shows how the converter efficiency can be traded-off for die area reduction and what is the impact of parasitic capacitances on the converter sizing.
Fully integrated wideband high-current rectifiers for inductively powered devices This paper describes the design and implementation of fully integrated rectifiers in BiCMOS and standard CMOS technologies for rectifying an externally generated RF carrier signal in inductively powered wireless devices, such as biomedical implants, radio-frequency identification (RFID) tags, and smartcards to generate an on-chip dc supply. Various full-wave rectifier topologies and low-power circuit design techniques are employed to decrease substrate leakage current and parasitic components, reduce the possibility of latch-up, and improve power transmission efficiency and high-frequency performance of the rectifier block. These circuits are used in wireless neural stimulating microsystems, fabricated in two processes: the University of Michigan's 3-μm 1M/2P N-epi BiCMOS, and the AMI 1.5-μm 2M/2P N-well standard CMOS. The rectifier areas are 0.12-0.48 mm2 in the above processes and they are capable of delivering >25mW from a receiver coil to the implant circuitry. The performance of these integrated rectifiers has been tested and compared, using carrier signals in 0.1-10-MHz range.
Differential Power Analysis Cryptosystem designers frequently assume that secrets will be manipulated in closed, reliable computing environments. Unfortunately, actual computers and microchips leak information about the operations they process. This paper examines specific methods for analyzing power consumption measurements to find secret keys from tamper-resistant devices. We also discuss approaches for building cryptosystems that can operate securely in existing hardware that leaks information.
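A toy difference-of-means attack in the spirit of DPA: simulate traces whose power draw leaks the Hamming weight of an S-box output, then split them on one predicted output bit per key guess; the correct guess maximizes the split's mean difference. The 4-bit PRESENT S-box, leakage model, and noise level are illustrative stand-ins, not the paper's measurement setup:

```python
import numpy as np

SBOX = np.array([0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
                 0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2])  # PRESENT 4-bit S-box

def hw(x):
    """Hamming weight of each value in an integer array."""
    return np.array([bin(int(v)).count("1") for v in x])

rng = np.random.default_rng(0)
key = 0x9
plaintexts = rng.integers(0, 16, size=2000)
# Simulated power traces: leakage ~ Hamming weight of the S-box output + noise.
traces = hw(SBOX[plaintexts ^ key]) + rng.normal(0, 1.0, size=2000)

# Difference-of-means DPA: split traces on one predicted output bit per guess.
scores = []
for guess in range(16):
    bit = SBOX[plaintexts ^ guess] & 1
    scores.append(abs(traces[bit == 1].mean() - traces[bit == 0].mean()))
print("recovered key nibble:", int(np.argmax(scores)), "(true:", key, ")")
```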
Synopsis diffusion for robust aggregation in sensor networks Aggregating sensor readings within the network is an essential technique for conserving energy in sensor networks. Previous work proposes aggregating along a tree overlay topology in order to conserve energy. However, a tree overlay is very fragile, and the high rate of node and link failures in sensor networks often results in a large fraction of readings being unaccounted for in the aggregate. Value splitting on multi-path overlays, as proposed in TAG, reduces the variance in the error, but still results in significant errors. Previous approaches are fragile, fundamentally, because they tightly couple aggregate computation and message routing. In this paper, we propose a family of aggregation techniques, called synopsis diffusion, that decouples the two, enabling aggregation algorithms and message routing to be optimized independently. As a result, the level of redundancy in message routing (as a trade-off with energy consumption) can be adapted to both expected and encountered network conditions. We present a number of concrete examples of synopsis diffusion algorithms, including a broadcast-based instantiation of synopsis diffusion that is as energy efficient as a tree, but dramatically more robust.
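Order- and duplicate-insensitive synopses are what let aggregation ride on redundant multi-path routing: a Flajolet-Martin count sketch merges with bitwise OR, so a reading delivered along several routes is still counted once. A small sketch of that idea (the hash choice, 32-sketch averaging, and 0.77351 correction constant follow the classical FM construction, assumed here for illustration):

```python
import hashlib

def fm_position(item, seed, maxbits=64):
    """Hash an item and count trailing zero bits (geometric distribution)."""
    h = int(hashlib.sha256(f"{seed}:{item}".encode()).hexdigest(), 16)
    pos = 0
    while pos < maxbits and not (h >> pos) & 1:
        pos += 1
    return pos

def synopsis(readings, seed):
    """Duplicate-insensitive synopsis: OR of one-hot hashed bit positions."""
    bits = 0
    for r in readings:
        bits |= 1 << fm_position(r, seed)
    return bits

def lowest_zero(bits):
    r = 0
    while bits & (1 << r):
        r += 1
    return r

readings = range(1000)
# Two overlapping partial aggregates (as multi-path routing would produce)
# merge by OR, so the 200 duplicated readings are not double counted.
rs = [lowest_zero(synopsis(readings[:600], s) | synopsis(readings[400:], s))
      for s in range(32)]
print(round(2 ** (sum(rs) / len(rs)) / 0.77351))   # approx. 1000
```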
A gm/ID-based methodology for the design of CMOS analog circuits and its application to the synthesis of a silicon-on-insulator micropower OTA A new design methodology based on a unified treatment of all the regions of operation of the MOS transistor is proposed. It is intended for the design of CMOS analog circuits and especially suited for low power circuits where the moderate inversion region often is used because it provides a good compromise between speed and power consumption. The synthesis procedure is based on the relation betwee...
Reputation management in collaborative computing systems In collaborative systems, a set of organizations shares their computing resources, such as compute cycles, storage space or on-line services, in order to establish Virtual Organizations (VOs) aimed at achieving common tasks. The formation and operation of Virtual Organizations involve establishing trust among their members and reputation is one measure by which such trust can be quantified and reasoned about. In this paper, we contribute to research in the area of trust for collaborative computing systems along two directions: first, we provide a survey on the main reputation-based systems that fulfil the trust requirements for collaborative systems, including reputation systems designed for e-commerce, agent-based environments, and Peer-to-Peer computing and Grid-based systems. Second, we present a model for reputation management for Grid Virtual Organizations that is based on utility computing and that can be used to rate users according to their resource usage and resources and their providers according to the quality of service they deliver. We also demonstrate, through Grid simulations, how the model can be used in improving completion and welfare in Virtual Organizations.
A 15.5 dB, wide signal swing, dynamic amplifier using a common-mode voltage detection technique This paper presents a high-speed, low-power and wide signal swing differential dynamic amplifier using a common-mode voltage detection technique. The proposed dynamic amplifier achieves a 15.5 dB gain with less than 1 dB drop over a signal swing of 1.3 Vpp at an operating frequency of 1.5 GHz with a VDD of 1.2 V in 90 nm CMOS. The power consumption of the proposed circuit can be reduced linearly with operating frequency lowering.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
1.056521
0.053187
0.026793
0.025298
0.018062
0.013374
0.001871
0.000019
0
0
0
0
0
0
Low-power programmable charge-domain sampler with embedded N-path bandpass filter for software-defined radio This paper proposes a charge-domain quadrature down-conversion sampling mixer with improved filter functionality. A 4-path bandpass filter and a quadrature sampling mixer are integrated in a cascode architecture to minimize power consumption while providing a degree of programmability. The proposed design is applicable to heterodyne receivers for suppressing aliasing signals, large out-of-band blockers, and IF images. It also offers partial channel selection. Designed in IBM 130 nm 1.2V CMOS technology, simulation results from Spectre of Cadence Design Systems with BSIM4 device models demonstrate that the proposed design exhibits aliasing rejection of 70 dB and stop band attenuation of 60 dB while consuming a current of 104 μA.
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use {\em dominance frontiers}, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
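The dominance-frontier computation admits a very compact implementation. The sketch below uses the later Cooper-Harvey-Kennedy formulation, which is equivalent to the frontiers defined in this paper but not its original bottom-up traversal, and assumes immediate dominators are already known.

```python
# Dominance frontiers from immediate dominators (Cooper-Harvey-Kennedy
# style).  DF(n) is the set of join points where n's dominance ends,
# i.e., where SSA phi-functions must be placed for definitions in n.
def dominance_frontiers(preds, idom):
    """preds: node -> list of predecessors; idom: node -> immediate dominator."""
    df = {n: set() for n in preds}
    for b in preds:
        if len(preds[b]) >= 2:              # only join points create frontiers
            for p in preds[b]:
                runner = p
                while runner != idom[b]:    # walk up the dominator tree
                    df[runner].add(b)
                    runner = idom[runner]
    return df

# Diamond CFG: entry -> a, entry -> b, a -> join, b -> join.
preds = {"entry": [], "a": ["entry"], "b": ["entry"], "join": ["a", "b"]}
idom = {"entry": None, "a": "entry", "b": "entry", "join": "entry"}
print(dominance_frontiers(preds, idom))
# 'join' appears in DF(a) and DF(b): definitions in a or b need a phi there.
```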
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ◊W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ◊W. Thus, ◊W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
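Chord's single operation is the key-to-node mapping on a consistent-hashing ring. The sketch below shows only that mapping; the O(log n) finger-table walk, node joins, and failure handling are omitted, and the helper names are our own.

```python
# Minimal Chord-style mapping: identifiers live on a ring of size 2^M,
# and each key is stored at its successor (first node clockwise).
import hashlib

M = 16  # identifier bits

def chord_id(name: str) -> int:
    """Hash a node name or key onto the identifier ring."""
    return int(hashlib.sha1(name.encode()).hexdigest(), 16) % (2 ** M)

def successor(nodes: list[int], key: int) -> int:
    """First node at or after the key on the ring (nodes sorted ascending)."""
    for n in nodes:
        if n >= key:
            return n
    return nodes[0]  # wrap around the ring

nodes = sorted(chord_id(f"node{i}") for i in range(8))
key = chord_id("my-data-item")
print(f"key {key} -> node {successor(nodes, key)}")
```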
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources, what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area2 product (EDA2P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA2P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
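For reference, the iteration the review is organized around, for minimizing f(x) + g(z) subject to Ax + Bz = c:

```latex
% ADMM updates, where the augmented Lagrangian is
%   L_rho(x,z,y) = f(x) + g(z) + y^T (Ax + Bz - c)
%                  + (rho/2) ||Ax + Bz - c||_2^2.
\begin{aligned}
x^{k+1} &:= \arg\min_{x} \; L_\rho(x, z^k, y^k) \\
z^{k+1} &:= \arg\min_{z} \; L_\rho(x^{k+1}, z, y^k) \\
y^{k+1} &:= y^k + \rho \left( A x^{k+1} + B z^{k+1} - c \right)
\end{aligned}
```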
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)-size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε²}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D+√n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(log N), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator; while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via error amplifier) with a moderate-gain high-frequency path (via bandpass filter) at the inputs of PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate two zeros required in a Type III compensator. Constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process with constant Gm/C biasing technique being applied to one of the designs. Measurement result shows that converter output is settled within 7 μs for a load current step of 500 mA. Peak efficiency of 97% is obtained at 360 mW output power, and high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of proposed compensator is reduced by > 75 % in both designs, compared to an equivalent conventional Type III compensator.
Conductance modulation techniques in switched-capacitor DC-DC converter for maximum-efficiency tracking and ripple mitigation in 22nm Tri-gate CMOS Active conduction modulation techniques are demonstrated in a fully integrated multi-ratio switched-capacitor voltage regulator with hysteretic control, implemented in 22nm tri-gate CMOS with high-density MIM capacitor. We present (i) an adaptive switching frequency and switch-size scaling scheme for maximum efficiency tracking across a wide range of voltages and currents, governed by a frequency-based control law that is experimentally validated across multiple dies and temperatures, and (ii) a simple active ripple mitigation technique to modulate the gate drive of select MOSFET switches effectively in all conversion modes. Efficiency boosts of up to 15% are measured under light-load conditions. Load-independent output ripple of <50 mV is achieved, enabling fewer interleaving phases. Testchip implementations and measurements demonstrate ease of integration in SoC designs, power efficiency benefits and EMI/RFI improvements.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Breath activity monitoring with wearable UWB radars: measurement and analysis of the pulses reflected by the human body. Objective: Measurements of ultrawideband (UWB) pulses reflected by the human body are conducted to evidence the differences in the received signal time behaviors due to respiration phases, and to experimentally verify previously obtained numerical results on the body's organs responsible for pulse reflection. Methods: Two experimental setups are used. The first one is based on a commercially avail...
On the spectral and power requirements for ultra-wideband transmission UWB systems based on impulse radio have the potential to provide very high data rates over short distances. In this paper, a new pulse shape is presented that satisfies the FCC spectral mask. Using this pulse, the link budget is calculated to quantify the relationship between data rate and distance. It is shown that UWB can be a good candidate for reliably transmitting 100 Mbps over distances of about 10 meters.
A UWB Impulse-Radio Timed-Array Radar With Time-Shifted Direct-Sampling Architecture in 0.18-μm CMOS This paper presents an ultra-wideband (UWB) impulse radio timed-array radar utilizing a time-shifted direct-sampling architecture. Time shift between the sampling time of the transmitter and the receiver determines the time of arrival (TOA), and a four-element timed antenna array enables beamforming. The different time shifts among the channels at the receiver determine the object's direction of arrival (DOA). Transmitter channels have different shifts, as well, to enhance spatial selectivity. The direct-sampling receiver reconstructs the scattered waveform in the digital domain, which provides full freedom to the backend digital signal processing. The on-chip digital-to-time converter (DTC) provides all the necessary timing with a fine resolution and wide range. The proposed architecture has a range and azimuth resolution of 0.75 cm and 3 degrees, respectively. The transmitter is capable of synthesizing a variety of pulses within 800 ps at a sampling rate of 10 GS/s. The receiver has an equivalent sampling frequency of 20 GS/s while supporting the RF bandwidth from 2 to 4 GHz. The proposed designs were fabricated in a 0.18-μm standard CMOS technology with a die size of 5.4×3.3 mm2 and 5.4×5.8 mm2 for the transmitter and the receiver, respectively.
A 4×4 IR UWB Timed-Array Radar Based on 16-Channel Transmitter and Sampling Capacitor Reused Receiver A 4×4 impulse radio (IR) ultra-wideband (UWB) timed-array radar is proposed in this brief based on a 16-channel all-digital transmitter and a sampling capacitor reused receiver. 3-D beamforming is achieved by the 16-channel (4×4) planar low power IR UWB beamforming transmitter. The UWB receiver adopts energy detection and reuses the integrating capacitor as a C-DAC within an 8-bit SAR ADC to save area ...
A Continuous Sweep-Clock-Based Time-Expansion Impulse-Radio Radar. This paper presents a single-chip impulse-radio (IR) radar transceiver that utilizes a novel continuous sweep-clock generator. While requiring low power and small area, the proposed clock generator enables a versatile IR radar operation with millimeter resolution. The radar detection range and update rate are adjustable by an on-chip delay command circuit or by an external master. The IR radar tra...
Unreliable failure detectors for reliable distributed systems We introduce the concept of unreliable failure detectors and study how they can be used to solve Consensus in asynchronous systems with crash failures. We characterise unreliable failure detectors in terms of two properties—completeness and accuracy. We show that Consensus can be solved even with unreliable failure detectors that make an infinite number of mistakes, and determine which ones can be used to solve Consensus despite any number of crashes, and which ones require a majority of correct processes. We prove that Consensus and Atomic Broadcast are reducible to each other in asynchronous systems with crash failures; thus, the above results also apply to Atomic Broadcast. A companion paper shows that one of the failure detectors introduced here is the weakest failure detector for solving Consensus [Chandra et al. 1992].
A Fast and High Quality Multilevel Scheme for Partitioning Irregular Graphs Recently, a number of researchers have investigated a class of graph partitioning algorithms that reduce the size of the graph by collapsing vertices and edges, partition the smaller graph, and then uncoarsen it to construct a partition for the original graph (Bui and Jones, Proc. of the 6th SIAM Conference on Parallel Processing for Scientific Computing, 1993, 445-452; Hendrickson and Leland, A Multilevel Algorithm for Partitioning Graphs, Tech. report SAND 93-1301, Sandia National Laboratories, Albuquerque, NM, 1993). From the early work it was clear that multilevel techniques held great promise; however, it was not known if they can be made to consistently produce high quality partitions for graphs arising in a wide range of application domains. We investigate the effectiveness of many different choices for all three phases: coarsening, partition of the coarsest graph, and refinement. In particular, we present a new coarsening heuristic (called heavy-edge heuristic) for which the size of the partition of the coarse graph is within a small factor of the size of the final partition obtained after multilevel refinement. We also present a much faster variation of the Kernighan-Lin (KL) algorithm for refining during uncoarsening. We test our scheme on a large number of graphs arising in various domains including finite element methods, linear programming, VLSI, and transportation. Our experiments show that our scheme produces partitions that are consistently better than those produced by spectral partitioning schemes in substantially smaller time. Also, when our scheme is used to compute fill-reducing orderings for sparse matrices, it produces orderings that have substantially smaller fill than the widely used multiple minimum degree algorithm.
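The heavy-edge heuristic itself is tiny. The sketch below shows one coarsening pass under an adjacency-map graph representation; the multilevel driver, initial partitioning, and KL refinement are omitted.

```python
# Heavy-edge matching: visit vertices in random order and match each
# unmatched vertex with the unmatched neighbor joined by the heaviest
# edge, so that contraction removes as much edge weight as possible.
import random

def heavy_edge_matching(adj):
    """adj: vertex -> {neighbor: edge_weight}. Returns vertex -> mate (or itself)."""
    match = {}
    order = list(adj)
    random.shuffle(order)
    for v in order:
        if v in match:
            continue
        candidates = [(w, u) for u, w in adj[v].items() if u not in match]
        if candidates:
            _, u = max(candidates)       # heaviest incident unmatched edge
            match[v], match[u] = u, v
        else:
            match[v] = v                 # no free neighbor: collapses alone
    return match

adj = {
    0: {1: 5, 2: 1}, 1: {0: 5, 3: 1},
    2: {0: 1, 3: 4}, 3: {1: 1, 2: 4},
}
print(heavy_edge_matching(adj))  # tends to contract heavy edges (0,1) and (2,3)
```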
Controllability and observability of Boolean control networks The controllability and observability of Boolean control networks are investigated. After a brief review on converting a logic dynamics to a discrete-time linear dynamics with a transition matrix, some formulas are obtained for retrieving network and its logical dynamic equations from this network transition matrix. Based on the discrete-time dynamics, the controllability via two kinds of inputs is revealed by providing the corresponding reachable sets precisely. Then the problem of observability is also solved by giving necessary and sufficient conditions.
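For orientation, a sketch of the standard algebraic form this line of work converts logic dynamics into (via the semi-tensor product), stated in the usual notation rather than the paper's full derivation:

```latex
% Boolean control network in algebraic form: states and inputs are
% encoded as canonical vectors, so the logic becomes a single matrix map.
x(t+1) = L \ltimes u(t) \ltimes x(t), \qquad
x(t) \in \Delta_{2^n}, \quad u(t) \in \Delta_{2^m}, \quad
L \in \mathcal{L}_{2^n \times 2^{m+n}}
% Reachability (and hence controllability) can then be checked from the
% Boolean powers of M = \bigvee_{j=1}^{2^m} L \ltimes \delta_{2^m}^{j}:
% a state is reachable iff the corresponding entry of some power is nonzero.
```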
From few to many: illumination cone models for face recognition under variable lighting and pose We present a generative appearance-based method for recognizing human faces under variation in lighting and viewpoint. Our method exploits the fact that the set of images of an object in fixed pose, but under all possible illumination conditions, is a convex cone in the space of images. Using a small number of training images of each face taken with different lighting directions, the shape and albedo of the face can be reconstructed. In turn, this reconstruction serves as a generative model that can be used to render (or synthesize) images of the face under novel poses and illumination conditions. The pose space is then sampled and, for each pose, the corresponding illumination cone is approximated by a low-dimensional linear subspace whose basis vectors are estimated using the generative model. Our recognition algorithm assigns to a test image the identity of the closest approximated illumination cone (based on Euclidean distance within the image space). We test our face recognition method on 4,050 images from the Yale Face Database B; these images contain 405 viewing conditions (9 poses × 45 illumination conditions) for 10 individuals. The method performs almost without error, except on the most extreme lighting directions, and significantly outperforms popular recognition methods that do not use a generative model.
A world survey of artificial brain projects, Part I: Large-scale brain simulations Driven by rapid ongoing advances in computer hardware, neuroscience and computer science, Artificial Brain research and development are blossoming. This article constitutes the first half of a two-part world survey of artificial brain projects: this part dealing with large-scale brain simulations, and the second part with biologically inspired cognitive architectures (BICAs). The large-scale brain simulations we consider in depth here include those by Markram, Modha, Boahen, Horwitz, Edelman, Izhikevich, and Just. As well as reviewing the particulars of these simulation projects, we position them in a broader perspective, comparing the different underlying definitions of the concept of "simulation," noting that in many ways the projects are modeling neurosystems at different levels as well as using different methodologies.
MicroGP—An Evolutionary Assembly Program Generator This paper describes μGP, an evolutionary approach for generating assembly programs tuned for a specific microprocessor. The approach is based on three clearly separated blocks: an evolutionary core, an instruction library and an external evaluator. The evolutionary core conducts adaptive population-based search. The instruction library is used to map individuals to valid assembly language programs. The external evaluator simulates the assembly program, providing the necessary feedback to the evolutionary core. μGP has some distinctive features that allow its use in specific contexts. This paper focuses on one such context: test program generation for design validation of microprocessors. Reported results show μGP being used to validate a complex 5-stage pipelined microprocessor. Its induced test programs outperform an exhaustive functional test and an instruction randomizer, showing that engineers are able to automatically obtain high-quality test programs.
Practical Timing Side Channel Attacks against Kernel Space ASLR Due to the prevalence of control-flow hijacking attacks, a wide variety of defense methods to protect both user space and kernel space code have been developed in the past years. A few examples that have received widespread adoption include stack canaries, non-executable memory, and Address Space Layout Randomization (ASLR). When implemented correctly (i.e., a given system fully supports these protection methods and no information leak exists), the attack surface is significantly reduced and typical exploitation strategies are severely thwarted. All modern desktop and server operating systems support these techniques and ASLR has also been added to different mobile operating systems recently. In this paper, we study the limitations of kernel space ASLR against a local attacker with restricted privileges. We show that an adversary can implement a generic side channel attack against the memory management system to deduce information about the privileged address space layout. Our approach is based on the intrinsic property that the different caches are shared resources on computer systems. We introduce three implementations of our methodology and show that our attacks are feasible on four different x86-based CPUs (both 32- and 64-bit architectures) and also applicable to virtual machines. As a result, we can successfully circumvent kernel space ASLR on current operating systems. Furthermore, we also discuss mitigation strategies against our attacks, and propose and implement a defense solution with negligible performance overhead.
ΣΔ ADC with fractional sample rate conversion for software defined radio receiver.
A Hybrid 1st/2nd-Order VCO-Based CTDSM With Rail-to-Rail Artifact Tolerance for Bidirectional Neural Interface Bi-directional brain machine interfaces enable simultaneous brain activity monitoring and neural modulation. However, stimulation artifact can saturate the instrumentation front-end while concurrent on-site recording is needed. This brief presents a voltage-controlled-oscillator (VCO) based continuous-time ΔΣ modulator (CTDSM) with rail-to-rail input range and fast artifact tracking. A hybrid 1st/2nd-order loop is designed to achieve high dynamic range (DR) and large input range. Stimulation artifact is detected by a phase counter and compensated by the 1st-order loop. The residue signal is digitized by the 2nd-order loop for high precision. Redundancy between the two loops is implemented as feedback capacitor elements with non-binary ratio to guarantee feedback stability and linearity. Fabricated in a 55-nm CMOS process, the prototype achieves 65.7 dB SNDR across 10 kHz bandwidth with a full scale of 200 mVpp, and a ±1.2 V input range is achieved to suppress artifacts. A saline-based experiment with simultaneous stimulation and recording demonstrates that the implemented system can track and tolerate rail-to-rail stimulation artifact within 30 μs while small neural signals can be continuously monitored.
1.2
0.2
0.2
0.2
0.066667
0
0
0
0
0
0
0
0
0
Dynamic Event-Triggered and Self-Triggered Control for Multi-Agent Systems We propose two novel dynamic event-triggered control laws to solve the average consensus problem for first-order continuous-time multi-agent systems over undirected graphs. Compared with most existing triggering laws, the proposed laws involve internal dynamic variables, which play an essential role in guaranteeing that the triggering time sequence does not exhibit Zeno behavior. Moreover, some existing triggering laws are special cases of ours. For the proposed self-triggered algorithm, continuous agent listening is avoided as each agent predicts its next triggering time and broadcasts it to its neighbors at the current triggering time. Thus, each agent only needs to sense and broadcast at its triggering times, and to listen to and receive incoming information from its neighbors at their triggering times. It is proved that the proposed triggering laws make the state of each agent converge exponentially to the average of the agents' initial states if and only if the underlying graph is connected. Numerical simulations are provided to illustrate the effectiveness of the theoretical results.
Fixed-Time Average Consensus of Nonlinear Delayed MASs Under Switching Topologies: An Event-Based Triggering Approach This article addresses the fixed-time average consensus problem of nonlinear multiagent systems (MASs) subject to input delay, external disturbances, and switching topologies. Different from the finite-time convergence, the convergence time of the fixed-time convergence is independent of initial conditions. Then, an event-based control strategy is presented to reach the fixed-time average consensus under switching topologies and intermittent communication. Because the nonlinear dynamics, external disturbances, switching topologies, and triggering condition for intermittent communication are considered, the fixed-time consensus problem is more challenging under the event-based control than under the continuous-time control. Besides, a new measurement error is designed based on the hyperbolic tangent function to avoid Zeno behavior. Furthermore, an improved triggering function is designed to avoid continuous monitoring. Hence, resource consumption is reduced significantly. Finally, the effectiveness of the algorithms is validated by three simulation examples.
Perception-Based Data Reduction and Transmission of Haptic Data in Telepresence and Teleaction Systems We present a novel approach for the transmission of haptic data in telepresence and teleaction systems. The goal of this work is to reduce the packet rate between an operator and a teleoperator without impairing the immersiveness of the system. Our approach exploits the properties of human haptic perception and is, more specifically, based on the concept of just noticeable differences. In our scheme, updates of the haptic amplitude values are signaled across the network only if the change of a haptic stimulus is detectable by the human operator. We investigate haptic data communication for a 1 degree-of-freedom (DoF) and a 3 DoF teleaction system. Our experimental results show that the presented approach is able to reduce the packet rate between the operator and teleoperator by up to 90% of the original rate without affecting the performance of the system.
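The just-noticeable-difference idea reduces to a one-line test per sample. A hedged sketch under Weber's law follows; the relative threshold k = 0.1 is illustrative, not the paper's experimentally tuned value, and the receiver is assumed to hold the last transmitted value between updates.

```python
# Perceptual deadband transmission: send a haptic sample only when it
# differs from the last transmitted value by more than a JND threshold
# proportional to that value (Weber's law).
def deadband_filter(samples, k=0.1):
    sent = []
    last = None
    for t, x in enumerate(samples):
        if last is None or abs(x - last) > k * abs(last):
            sent.append((t, x))   # humanly detectable change: transmit
            last = x              # receiver holds this value meanwhile
    return sent

samples = [1.00, 1.02, 1.05, 1.30, 1.31, 1.32, 0.90]
updates = deadband_filter(samples)
print(f"{len(updates)} of {len(samples)} samples transmitted: {updates}")
```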
The Tactile Internet: Applications and Challenges Wireless communications today enables us to connect devices and people for an unprecedented exchange of multimedia and data content. The data rates of wireless communications continue to increase, mainly driven by innovation in electronics. Once the latency of communication systems becomes low enough to enable a round-trip delay from terminals through the network back to terminals of approximately 1 ms, an overlooked breakthrough, human tactile to visual feedback control, will change how humans communicate around the world. Using these controls, wireless communications can be the platform for enabling the control and direction of real and virtual objects in many situations of our life. Almost no area of the economy will be left untouched, as this new technology will change health care, mobility, education, manufacturing, smart grids, and much more. The Tactile Internet will become a driver for economic growth and innovation and will help bring a new level of sophistication to societies.
Fully Distributed Synchronization of Dynamic Networked Systems With Adaptive Nonlinear Couplings. In this article, we consider the distributed synchronization problem of dynamic networked systems with adaptive nonlinear couplings. Based on how the information is collected, the interactions between subsystems are characterized by nonlinear relative state couplings and nonlinear absolute state couplings. In both cases, we show that the considered nonlinear interactions can be used to simulate th...
Deadband Feedback-Based Scheduling Approach For Networked Control System With Variable Sampling Period Reasonable information scheduling strategies in the networked control system (NCS) can improve the quality of service of the network, reduce conflicts in network information transmission, and improve the overall performance of the NCS. In order to improve the performance of the NCS, a deadband feedback-based scheduling approach for the NCS with a variable sampling period is proposed. For the NCS with multiple control loops, considering the limitation of network bandwidth resources, dynamic real-time adjustment of the multi-loop sampling period is achieved through network utilization prediction, network bandwidth configuration and sampling period calculation. Furthermore, deadband feedback scheduling is combined with a variable sampling period algorithm. A deadband is set in the sensor and controller nodes to effectively adjust the information flow of the forward channel and the feedback channel. The proposed scheduling approach can reduce the impact of network conflict and network delay on system stability, allocate network resources reasonably, save network data traffic, and improve the overall performance of the NCS. An NCS with five control loops is used as the simulation object, and the simulations are carried out with the TrueTime toolbox. The simulation results show that the proposed scheduling approach can improve the output control performance of the system, reduce the integral absolute error of the control loops, and improve network utilization. The overall control performance of the system is improved.
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use {\em dominance frontiers}, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
GPUWattch: enabling energy optimizations in GPGPUs General-purpose GPUs (GPGPUs) are becoming prevalent in mainstream computing, and performance per watt has emerged as a more crucial evaluation metric than peak performance. As such, GPU architects require robust tools that will enable them to quickly explore new ways to optimize GPGPUs for energy efficiency. We propose a new GPGPU power model that is configurable, capable of cycle-level calculations, and carefully validated against real hardware measurements. To achieve configurability, we use a bottom-up methodology and abstract parameters from the microarchitectural components as the model's inputs. We developed a rigorous suite of 80 microbenchmarks that we use to bound any modeling uncertainties and inaccuracies. The power model is comprehensively validated against measurements of two commercially available GPUs, and the measured error is within 9.9% and 13.4% for the two target GPUs (GTX 480 and Quadro FX5600). The model also accurately tracks the power consumption trend over time. We integrated the power model with the cycle-level simulator GPGPU-Sim and demonstrate the energy savings by utilizing dynamic voltage and frequency scaling (DVFS) and clock gating. Traditional DVFS reduces GPU energy consumption by 14.4% by leveraging within-kernel runtime variations. More finer-grained SM cluster-level DVFS improves the energy savings from 6.6% to 13.6% for those benchmarks that show clustered execution behavior. We also show that clock gating inactive lanes during divergence reduces dynamic power by 11.2%.
Self-stabilizing systems in spite of distributed control The synchronization task between loosely coupled cyclic sequential processes (as can be distinguished in, for instance, operating systems) can be viewed as keeping the relation “the system is in a legitimate state” invariant. As a result, each individual process step that could possibly cause violation of that relation has to be preceded by a test deciding whether the process in question is allowed to proceed or has to be delayed. The resulting design is readily—and quite systematically—implemented if the different processes can be granted mutually exclusive access to a common store in which “the current system state” is recorded.
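Dijkstra's K-state token ring is the canonical instance of this idea. A sketch follows, assuming a central daemon that fires one privileged machine per step and K at least the number of machines; from any initial state, the ring converges to the legitimate states with exactly one circulating privilege.

```python
# Dijkstra's K-state self-stabilizing token ring for n machines.
# Machine 0 is privileged when it equals its left neighbor; the others
# when they differ from theirs.
import random

def privileges(S):
    p = [0] if S[0] == S[-1] else []                # bottom machine's rule
    return p + [i for i in range(1, len(S)) if S[i] != S[i - 1]]

def step(S, K):
    i = random.choice(privileges(S))                # daemon picks one privilege
    if i == 0:
        S[0] = (S[0] + 1) % K                       # bottom injects a new value
    else:
        S[i] = S[i - 1]                             # others copy their neighbor

# Start from an arbitrary, illegitimate state with several privileges.
K, S = 5, [3, 1, 4, 1, 2]
for _ in range(200):
    step(S, K)
print(S, "->", len(privileges(S)), "privilege(s)")  # stabilizes to exactly 1
```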
Adaptive Synchronization of an Uncertain Complex Dynamical Network This brief paper further investigates the locally and globally adaptive synchronization of an uncertain complex dynamical network. Several network synchronization criteria are deduced. Especially, our hypotheses and designed adaptive controllers for network synchronization are rather simple in form. It is very useful for future practical engineering design. Moreover, numerical simulations are also given to show the effectiveness of our synchronization approaches.
TCP/IP Timing Channels: Theory to Implementation There has been significant recent interest in covert communication using timing channels. In network timing channels, information is leaked by controlling the time between transmissions of consecutive packets. Our work focuses on network timing channels and provides two main contributions. The first is to quantify the threat posed by covert network timing channels. The other is to use timing channels to communicate at a low data rate without being detected. In this paper, we design and implement a covert TCP/IP timing channel. We are able to quantify the achievable data rate (or leak rate) of such a covert channel. Moreover, we show that by sacrificing data rate, the traffic patterns of the covert timing channel can be made computationally indistinguishable from that of normal traffic, which makes detecting such communication virtually impossible. We demonstrate the efficacy of our solution by showing significant performance gains in terms of both data rate and covertness over the state-of-the-art.
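A toy model of such a channel: bits ride on inter-packet gaps and are decoded by thresholding. Network jitter, TCP dynamics, and the paper's traffic shaping for covertness are all deliberately absent; the delay values are arbitrary.

```python
# Binary timing channel: a long gap encodes 1, a short gap encodes 0.
def encode(bits, short=0.01, long=0.05):
    """Return the delay (seconds) to insert before each packet."""
    return [long if b else short for b in bits]

def decode(gaps, threshold=0.03):
    """Recover bits by thresholding the observed inter-packet gaps."""
    return [1 if g > threshold else 0 for g in gaps]

bits = [1, 0, 1, 1, 0]
gaps = encode(bits)
assert decode(gaps) == bits
print(gaps)
```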
A decentralized modular control framework for robust control of FES-activated walker-assisted paraplegic walking using terminal sliding mode and fuzzy logic control. A major challenge to developing functional electrical stimulation (FES) systems for paraplegic walking and widespread acceptance of these systems is the design of a robust control strategy that provides satisfactory tracking performance. The systems need to be robust against time-varying properties of neuromusculoskeletal dynamics, day-to-day variations, subject-to-subject variations, external dis...
OpenIoT: An open service framework for the Internet of Things The Internet of Things (IoT) has been a hot topic for the future of computing and communication. It will not only have a broad impact on our everyday life in the near future, but also create a new ecosystem involving a wide array of players such as device developers, service providers, software developers, network operators, and service users. In this paper, we present an open service framework for the Internet of Things, facilitating entrance into the IoT-related mass market, and establishing a global IoT ecosystem with the worldwide use of IoT devices and softwares. We expect that the open IoT service framework we proposed will play an important role in the widespread adoption of the Internet of Things in our everyday life, enhancing our quality of life with a large number of innovative applications and services, but also offering endless opportunities to all of the stakeholders in the world of information and communication technologies.
A Hybrid 1st/2nd-Order VCO-Based CTDSM With Rail-to-Rail Artifact Tolerance for Bidirectional Neural Interface Bi-directional brain machine interfaces enable simultaneous brain activity monitoring and neural modulation. However, stimulation artifact can saturate the instrumentation front-end while concurrent on-site recording is needed. This brief presents a voltage-controlled-oscillator (VCO) based continuous-time ΔΣ modulator (CTDSM) with rail-to-rail input range and fast artifact tracking. A hybrid 1st/2nd-order loop is designed to achieve high dynamic range (DR) and large input range. Stimulation artifact is detected by a phase counter and compensated by the 1st-order loop. The residue signal is digitized by the 2nd-order loop for high precision. Redundancy between the two loops is implemented as feedback capacitor elements with non-binary ratio to guarantee feedback stability and linearity. Fabricated in a 55-nm CMOS process, the prototype achieves 65.7 dB SNDR across 10 kHz bandwidth with a full scale of 200 mVpp, and a ±1.2 V input range is achieved to suppress artifacts. A saline-based experiment with simultaneous stimulation and recording demonstrates that the implemented system can track and tolerate rail-to-rail stimulation artifact within 30 μs while small neural signals can be continuously monitored.
1.1
0.1
0.1
0.1
0.1
0.1
0
0
0
0
0
0
0
0
A CMOS monolithic ΔΣ-controlled fractional-N frequency synthesizer for DCS-1800 A monolithic 1.8-GHz ΔΣ-controlled fractional-N phase-locked loop (PLL) frequency synthesizer is implemented in a standard 0.25-μm CMOS technology. The monolithic fourth-order type-II PLL integrates the digital synthesizer part together with a fully integrated LC VCO, a high-speed prescaler, and a 35-kHz dual-path loop filter on a die of only 2×2 mm2. To investigate the influence of the ΔΣ modulator on the synthesizer's spectral purity, a fast nonlinear analysis method is developed and experimentally verified. Nonlinear mixing in the phase-frequency detector (PFD) is identified as the main source of spectral pollution in ΔΣ fractional-N synthesizers. The design of the zero-dead zone PFD and the dual charge pump is optimized toward linearity and spurious suppression. The frequency synthesizer consumes 35 mA from a single 2-V power supply. The measured phase noise is as low as -120 dBc/Hz at 600 kHz and -139 dBc/Hz at 3 MHz. The measured fractional spur level is less than -100 dBc, even for fractional frequencies close to integer multiples of the reference frequency, thereby satisfying the DCS-1800 spectral purity constraints
A 1.1-GHz CMOS fractional-N frequency synthesizer with a 3-b third-order ΔΣ modulator A 1.1-GHz fractional-N frequency synthesizer is implemented in 0.5-μm CMOS employing a 3-b third-order ΔΣ modulator. The in-band phase noise of -92 dBc/Hz at 10-kHz offset with a spur of less than -95 dBc is measured at 900.03 MHz with a phase detector frequency of 7.994 MHz and a loop bandwidth of 40 kHz. Having less than 1-Hz frequency resolution and agile switching sp...
Second and third-order successive requantizers for spurious tone reduction in low-noise fractional-N PLLs This paper presents 2nd- and 3rd-order digital requantizers which can be used as drop-in replacements for digital delta-sigma modulators in analog fractional-N PLLs to reduce fractional spurs. The requantizers are demonstrated and compared to conventional delta-sigma modulators in a low-noise 3.35 GHz PLL IC and shown to offer significant reductions in worst-case spurious tones with similar phase noise relative to their delta-sigma modulator counterparts.
A 1.1-GHz CMOS fractional-N frequency synthesizer with a 3-b third-order ΔΣ modulator A 1.1-GHz fractional-N frequency synthesizer is implemented in 0.5-μm CMOS employing a 3-b third-order ΔΣ modulator. The in-band phase noise of -92 dBc/Hz at 10-kHz offset with a spur of less than -95 dBc is measured at 900.03 MHz with a phase detector frequency of 7.994 MHz and a loop bandwidth of 40 kHz. Having less than 1-Hz frequency resolution and agile switching sp...
Second and Third-Order Noise Shaping Digital Quantizers for Low Phase Noise and Nonlinearity-Induced Spurious Tones in Fractional-N PLLs. Noise shaping digital quantizers, most commonly digital delta-sigma (ΔΣ) modulators, are used in fractional-N phase-locked loops (PLLs) to enable fractional frequency tuning. Unfortunately, their quantization noise is subjected to nonlinear distortion because of the PLL's inevitable non-ideal analog circuit behavior, which induces spurious tones in the PLL's phase error. Successive requantizers ha...
Rigorous analysis of delta-sigma modulators for fractional-N PLL frequency synthesis In this paper, rigorous analyses are presented for higher order multistage noise shaping (MASH) Delta-Sigma (ΔΣ) modulators, which are built out of cascaded first-order stages, with rational DC inputs and nonzero initial conditions. Asymptotic statistics such as the mean, average power, and autocorrelation of the binary quantizer error are formulated using a nonlinear differenc...
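For orientation, the standard input-output relation of the architecture analyzed here, a third-order MASH built from first-order stages; the statistics of the surviving error term are what the paper derives exactly:

```latex
% MASH 1-1-1: the first two stages' quantization errors cancel in the
% noise-cancellation network, leaving only the last stage's error e_3,
% shaped by a third-order difference.
Y(z) = X(z) + \left(1 - z^{-1}\right)^{3} E_3(z)
```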
A 700-kHz bandwidth ΣΔ fractional synthesizer with spurs compensation and linearization techniques for WCDMA applications A ΣΔ fractional-N frequency synthesizer targeting WCDMA receiver specifications is presented. Through spurs compensation and linearization techniques, the PLL bandwidth is significantly extended with only a slight increase in the integrated phase noise. In a 0.18-μm standard digital CMOS technology, a fully integrated prototype with 2.1-GHz output frequency and 35-Hz resolution has an area of 3.4 mm2, pads included, and it consumes 28 mW. With a 3-dB closed-loop bandwidth of 700 kHz, the settling time is only 7 μs. The integrated phase noise plus spurs is -45 dBc for the first WCDMA channel (1 kHz to 1.94 MHz) and -65 dBc for the second channel (2.5 to 6.34 MHz) with a worst case in-band (unfiltered) fractional spur of -60 dBc. Given the extremely large bandwidth, the synthesizer could also be used for TX direct modulation over a broad band. The choice of such a large bandwidth, however, still limits the spur performance. A slightly smaller bandwidth would fulfill WCDMA requirements. This has been shown in a second prototype, using the same architecture but employing an external loop filter and VCO for greater flexibility and ease of testing.
Full Four-Channel 6.3-Gb/s 60-GHz CMOS Transceiver With Low-Power Analog and Digital Baseband Circuitry This paper presents a 60-GHz direct-conversion RF front-end and baseband transceiver including analog and digital circuitry for PHY functions. The 65-nm CMOS front-end consumes 319 and 223 mW in transmitting and receiving mode, respectively. It is capable of more than 7-Gb/s 16QAM wireless communication for every channel of the 60-GHz standards, which can be extended up to 10 Gb/s. The 40-nm CMOS baseband including analog, digital, and I/O consumes 196 and 427 mW for 16QAM in transmitting and receiving modes, respectively. In the analog baseband, a 5-b 2304-MS/s ADC consumes 12 mW, and a 6-b 3456-MS/s DAC consumes 11 mW. In the digital baseband integrating all PHY functions, a (1440, 1344) LDPC decoder consumes 74 mW with the low energy efficiency of 11.8 pJ/b. The entire system including both RF and BB using a 6-dBi antenna built in the organic package can transmit 3.1 Gb/s over 1.8 m in QPSK and 6.3 Gb/s over 0.05 m in 16QAM.
A Wideband Inductorless LNA With Local Feedback and Noise Cancelling for Low-Power Low-Voltage Applications A wideband noise-cancelling low-noise amplifier (LNA) without the use of inductors is designed for low-voltage and low-power applications. Based on the common-gate-common-source (CG-CS) topology, a new approach employing local negative feedback is introduced between the parallel CG and CS stages. The moderate gain at the source of the cascode transistor in the CS stage is utilized to boost the transconductance of the CG transistor. This leads to an LNA with higher gain and lower noise figure (NF) compared with the conventional CG-CS LNA, particularly under low power and voltage constraints. By adjusting the local open-loop gain, the NF can be optimized by distributing the power consumption among transistors and resistors based on their contribution to the NF. The optimal value of the local open-loop gain can be obtained by taking into account the effect of phase shift at high frequency. The linearity is improved by employing two types of distortion-cancelling techniques. Fabricated in a 0.13-μm RF CMOS process, the LNA achieves a voltage gain of 19 dB and an NF of 2.8-3.4 dB over a 3-dB bandwidth of 0.2-3.8 GHz. It consumes 5.7 mA from a 1-V supply and occupies an active area of only 0.025 mm2.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
Architecture Design of Reconfigurable Pipelined Datapaths This paper examines reconfigurable pipelined datapaths (RaPiDs), a new architecture style for computation-intensive applications that bridges the cost/performance gap between general purpose and application specific architectures. RaPiDs can provide significantly higher performance than general purpose processors on a wide range of applications from the areas of video and signal processing, scientific computing, and communications. Moreover, RaPiDs provide the flexibility that doesn't come with application-specific architectures.A RaPiD architecture is optimized for highly repetitive, computationally-intensive tasks. Very deep application-specific computation pipelines can be configured that deliver very high performance for a wide range of applications. RaPiDs achieve this using a coarse-grained reconfigurable architecture that mixes the appropriate amount of static configuration with dynamic control.We describe the fundamental features of a RaPiD architecture, including the linear array of functional units, a programmable segmented bus structure, and a programmable control architecture. In addition, we outline the floorplan of the architecture and provide timing data for the most critical paths. We conclude with performance numbers for several applications on an instance of a RaPiD architecture.
A monolithic buck DC-DC converter with on-chip PWM circuit A monolithic CMOS voltage-mode buck DC-DC converter with integrated power switches and a new on-chip pulse-width modulation (PWM) switching-control technique is presented in this paper. The PWM scheme is built around a CMOS ring oscillator whose duty cycle is compensated by a pseudo-hyperbola-curve current generator to achieve nearly constant frequency operation. The minimum operating voltage of this voltage-mode buck DC-DC converter is 1.2 V. The proposed buck DC-DC converter, with a chip area of 0.82 mm², is fabricated in a standard 0.35-μm CMOS process. The experimental results show that the converter is well regulated over an output range from 0.3 to 1.2 V, with an input voltage of 1.5 V. The maximum efficiency of the converter is 88%, and its efficiency is kept above 80% over an output power ranging from 30 to 300 mW.
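As a quick sanity check of the reported regulation range, the ideal lossless buck relation Vout = D·Vin can be evaluated directly (a sketch only; the fabricated converter's real duty cycles differ because of switch and inductor losses):

```python
# Ideal (lossless) buck converter: Vout = D * Vin, hence D = Vout / Vin.
V_IN = 1.5  # input voltage from the abstract, in volts

for v_out in (0.3, 0.6, 0.9, 1.2):   # regulated output range, in volts
    duty = v_out / V_IN
    print(f"Vout = {v_out:.1f} V -> ideal duty cycle D = {duty:.2f}")
```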
CCFI: Cryptographically Enforced Control Flow Integrity Control flow integrity (CFI) restricts jumps and branches within a program to prevent attackers from executing arbitrary code in vulnerable programs. However, traditional CFI still offers attackers too much freedom to choose between valid jump targets, as seen in recent attacks. We present a new approach to CFI based on cryptographic message authentication codes (MACs). Our approach, called cryptographic CFI (CCFI), uses MACs to protect control flow elements such as return addresses, function pointers, and vtable pointers. Through dynamic checks, CCFI enables much finer-grained classification of sensitive pointers than previous approaches, thwarting all known attacks and resisting even attackers with arbitrary access to program memory. We implemented CCFI in Clang/LLVM, taking advantage of recently available cryptographic CPU instructions (AES-NI). We evaluate our system on several large software packages (including nginx, Apache and memcache) as well as all their dependencies. The cost of protection ranges from a 3--18% decrease in server request rate. We also expect this overhead to shrink as Intel improves the performance of AES-NI.
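To make the mechanism concrete, here is a minimal sketch of MAC-tagging a return address so that tampering is caught before the pointer is used. This is Python with HMAC-SHA256 for illustration only; the actual system instruments compiled code and uses AES-NI-based MACs, and the address below is hypothetical:

```python
import hmac, hashlib, secrets

KEY = secrets.token_bytes(16)   # per-process secret; CCFI keeps its key in registers

def tag(addr):
    """MAC a code pointer (truncated tag, for illustration)."""
    return hmac.new(KEY, addr.to_bytes(8, "little"), hashlib.sha256).digest()[:8]

def checked(addr, mac):
    """Verify the MAC before an indirect control transfer; abort on mismatch."""
    if not hmac.compare_digest(tag(addr), mac):
        raise RuntimeError("control-flow integrity violation")
    return addr

ret_addr = 0x401337                    # hypothetical return address
protected = (ret_addr, tag(ret_addr))  # what would live on the protected stack
checked(*protected)                    # passes; flipping any bit above would abort
```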
Power Efficiency Comparison of Event-Driven and Fixed-Rate Signal Conversion and Compression for Biomedical Applications Energy-constrained biomedical recording systems need power-efficient data converters and good signal compression in order to meet the stringent power consumption requirements of many applications. In the literature today, a SAR ADC in combination with digital compression is typically used. Recently, alternative event-driven sampling techniques have been proposed that incorporate compression in the ADC, such as level-crossing A/D conversion. This paper describes the power efficiency analysis of such a level-crossing ADC (LCADC) and the traditional fixed-rate SAR ADC with simple compression. A model for the power consumption of the LCADC is derived, which is then compared to the power consumption of the SAR ADC with zero-order hold (ZOH) compression for multiple biosignals (ECG, EMG, EEG, and EAP). The LCADC is more power efficient than the SAR ADC up to a cross-over point in quantizer resolution (for example 8 bits for an EEG signal). This cross-over point decreases with the ratio of the maximum to average slope in the signal of the application. It also changes with the technology and design techniques used. The LCADC is thus suited for low to medium resolution applications. In addition, the event-driven operation of an LCADC results in less data to be transmitted in a system application. The event-driven LCADC without timer and with a single-bit quantizer achieves a reduction in power consumption at the system level of two orders of magnitude, an order of magnitude better than the SAR ADC with ZOH compression. At the system level, the LCADC thus offers a big advantage over the SAR ADC.
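A hedged sketch of the level-crossing principle the comparison rests on: samples are emitted only when the input crosses a quantization level, so slowly varying biosignals produce far fewer events than fixed-rate sampling (the waveform and delta below are toy choices; a real LCADC also timestamps events and contends with comparator offset and delay):

```python
import numpy as np

def level_crossing_sample(signal, delta):
    """Emit one (index, level) event per quantization level crossed."""
    events, last = [], signal[0]
    for i, x in enumerate(signal):
        while abs(x - last) >= delta:
            last += delta * np.sign(x - last)  # snap to the crossed level
            events.append((i, last))
    return events

t = np.linspace(0, 1, 1000)              # 1000 fixed-rate samples
toy = np.exp(-((t - 0.5) ** 2) / 0.001)  # one spike on a flat baseline
events = level_crossing_sample(toy, delta=0.1)
print(f"{len(events)} level-crossing events vs {len(t)} fixed-rate samples")
```

The event count tracks the signal's slope rather than a clock, which is exactly why the cross-over point above moves with the maximum-to-average slope ratio.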
1.016796
0.014738
0.013333
0.011313
0.00836
0.006138
0.003333
0.000022
0.000001
0
0
0
0
0
Predictor-Based Control Of Linear Systems With Large And Variable Measurement Delays This paper concerns the problem of the control of linear systems by means of feedback from delayed output, where the delay is known and time-varying. The main advantage of the approach is that it can be applied to systems with any delay bound, i.e. not only small delays. The predictor is based on a combination of finite-dimensional elementary predictors whose number can be suitably chosen to compensate any delay. The elementary predictor itself is an original proposal, and the class of delays to which the scheme can be applied includes, but is not limited to, continuous delay functions.
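For context, the classical constant-delay predictor that such schemes generalize (standard in the predictor-feedback literature, not quoted from this paper): for ẋ(t) = Ax(t) + Bu(t − D) with known constant delay D, one feeds back the predicted state,

```latex
u(t) = K\,\hat{x}(t+D), \qquad
\hat{x}(t+D) = e^{AD}x(t) + \int_{t-D}^{t} e^{A(t-\theta)} B\,u(\theta)\,\mathrm{d}\theta .
```

The integral over past inputs makes this controller infinite-dimensional; the approach above instead chains finite-dimensional elementary predictors, which is what allows arbitrary delay bounds and time-varying delays.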
Robust compensation of a chattering time-varying input delay We investigate the design of a prediction-based controller for a linear system subject to a time-varying input delay, not necessarily causal. This means that the information feeding the system can be older than information previously received. We propose to use the current delay value in the prediction employed in the control law. Modeling the input delay as a transport Partial Differential Equation, we prove asymptotic tracking of the system state, provided that the average ℒ2-norm of the delay time-derivative is sufficiently small. This result is obtained by generalizing the Halanay inequality to time-varying differential inequalities.
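For reference, the classical constant-coefficient Halanay inequality that the paper generalizes to time-varying differential inequalities (stated here in its textbook form):

```latex
\dot{v}(t) \le -a\,v(t) + b \sup_{s \in [t-D,\,t]} v(s), \quad a > b > 0
\;\Longrightarrow\;
v(t) \le \Big( \sup_{s \in [-D,\,0]} v(s) \Big)\, e^{-\gamma t},
```

where γ > 0 is the unique positive root of γ = a − b e^{γD}.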
A Chain Observer for Nonlinear Systems with Multiple Time-Varying Measurement Delays. This paper presents a method for designing state observers with exponential error decay for nonlinear systems whose output measurements are affected by known time-varying delays. A modular approach is followed, where subobservers are connected in cascade to achieve a desired exponential convergence rate (chain observer). When the delay is small, a single-step observer is sufficient to carry out the goal. Two or more subobservers are needed in the presence of large delays. The observer employs delay-dependent time-varying gains to achieve the desired exponential error decay. The proposed approach makes it possible to deal with vector output measurements, where each output component can be affected by a different delay. Relationships among the error decay rate, the bound on the measurement delays, the observer gains, and the Lipschitz constants of the system are presented. The method is illustrated on the synchronization problem of continuous-time hyperchaotic systems with buffered measurements.
Isometric Torque Control for Neuromuscular Electrical Stimulation With Time-Varying Input Delay. Previous results have shown experimental evidence that the muscle response to neuromuscular electrical stimulation (NMES) is delayed; the time lag is often referred to as electromechanical delay. NMES closed-loop control methods have been developed to compensate for a known constant input delay. However, as a muscle fatigues, this delay increases. This paper develops a feedback controller that robustly compensates for the time-varying delay of an uncertain muscle model during isometric contractions. The controller is proven to yield global uniformly ultimately bounded torque tracking error. Experimental results illustrate the effectiveness of the developed controller and the time-varying nature of the delayed response.
Pseudo-predictor feedback stabilization of linear systems with time-varying input delays This paper is concerned with stabilization of (time-varying) linear systems with a single time-varying input delay by using the predictor-based delay compensation approach. Unlike the traditional predictor feedback, which uses the open-loop system dynamics to predict the future state and results in an infinite-dimensional controller, we propose in this paper a pseudo-predictor feedback (PPF) approach which uses the (artificial) closed-loop system dynamics to predict the future state; the resulting controller is finite-dimensional and is thus easy to implement. Necessary and sufficient conditions guaranteeing the stability of the closed-loop system under the PPF are obtained in terms of the stability of a class of integral delay operators (systems). Moreover, it is shown that the PPF can compensate arbitrarily large yet bounded input delays provided the open-loop (time-varying linear) system is only polynomially unstable and the feedback gain is well designed. Comparisons of the proposed PPF approach with existing results are thoroughly explored. Numerical examples demonstrate the effectiveness of the proposed approaches.
Exponential Stability of Nonlinear Time-Varying Differential Equations and Partial Averaging In this paper we formulate, within the Liapunov framework, a sufficient condition for exponential stability of a differential equation. This condition gives rise to a new averaging result referred to as "partial averaging": exponential stability of a system, with α sufficiently large, is implied by exponential stability of an associated time-varying system.
Input Delay Compensation for Forward Complete and Strict-Feedforward Nonlinear Systems We present an approach for compensating input delay of arbitrary length in nonlinear control systems. This approach, which due to the infinite dimensionality of the actuator dynamics and due to the nonlinear character of the plant results in a nonlinear feedback operator, is essentially a nonlinear version of the Smith predictor and its various predictor-based modifications for linear plants. Global stabilization in the presence of arbitrarily long delay is achieved for all nonlinear plants that are globally stabilizable in the absence of delay and that satisfy the property of forward completeness (which is satisfied by most mechanical systems, electromechanical systems, vehicles, and other physical systems). For strict-feedforward systems, one obtains the predictor-based feedback law explicitly. For the linearizable subclass of strict-feedforward systems, closed-loop solutions are also obtained explicitly. The feedback designs are illustrated through two detailed examples.
Robust adaptive boundary control of a flexible marine riser with vessel dynamics In this paper, robust adaptive boundary control for a flexible marine riser with vessel dynamics is developed to suppress the riser's vibration. To provide an accurate and concise representation of the riser's dynamic behavior, the flexible marine riser with vessel dynamics is described by a distributed parameter system with a partial differential equation (PDE) and four ordinary differential equations (ODEs). Boundary control is proposed at the top boundary of the riser based on Lyapunov's direct method to regulate the riser's vibration. Adaptive control is designed when the system parametric uncertainty exists. With the proposed robust adaptive boundary control, uniform boundedness under the ocean current disturbance can be achieved. The proposed control is implementable with actual instrumentation since all the required signals in the control can be measured by sensors or calculated by a backward difference algorithm. The state of the system is proven to converge to a small neighborhood of zero by appropriately choosing design parameters. Simulations are provided to illustrate the applicability and effectiveness of the proposed control.
Measuring the Gap Between FPGAs and ASICs This paper presents experimental measurements of the differences between a 90nm CMOS FPGA and 90nm CMOS Standard Cell ASICs in terms of logic density, circuit speed and power consumption. We are motivated to make these measurements to enable system designers to make better informed choices between these two media and to give insight to FPGA makers on the deficiencies to attack and thereby improve FPGAs. In the paper, we describe the methodology by which the measurements were obtained and we show that, for circuits containing only combinational logic and flip-flops, the ratio of silicon area required to implement them in FPGAs and ASICs is on average 40. Modern FPGAs also contain "hard" blocks such as multiplier/accumulators and block memories, and we find that these blocks reduce this average area gap significantly to as little as 21. The ratio of critical path delay, from FPGA to ASIC, is roughly 3 to 4, with less influence from block memory and hard multipliers. The dynamic power consumption ratio is approximately 12 times and, with hard blocks, this gap generally becomes smaller.
Low-Power Programmable Gain CMOS Distributed LNA A design methodology for low power MOS distributed amplifiers (DAs) is presented. The bias point of the MOS devices is optimized so that the DA can be used as a low-noise amplifier (LNA) in broadband applications. A prototype 9-mW LNA with programmable gain was implemented in a 0.18-μm CMOS process. The LNA provides a flat gain, S21, of 8 ± 0.6 dB from DC to 6.2 GHz, with an...
Software radio architecture with smart antennas: a tutorial on algorithms and complexity There has been considerable interest in using antenna arrays in wireless communication networks to increase the capacity and decrease the cochannel interference. Adaptive beamforming with smart antennas at the receiver increases the carrier-to-interference ratio (CIR) in a wireless link. This paper considers a wireless network with beamforming capabilities at the receiver which allows two or more transmitters to share the same channel to communicate with the base station. The concrete computational complexity and algorithm structure of a base station are considered in terms of a software radio system model, initially with an omnidirectional antenna. The software radio computational model is then expanded to characterize a network with smart antennas. The application of the software radio smart antenna is demonstrated through two examples. First, traffic improvement in a network with a smart antenna is considered, and the implementation of a hand-off algorithm in the software radio is presented. The blocking probabilities of the calls and total carried traffic in the system under different traffic policies are derived. The analytical and numerical results show that adaptive beamforming at the receiver reduces the probability of blocking and forced termination of the calls and increases the total carried traffic in the system. Then, a joint beamforming and power control algorithm is implemented in a software radio smart antenna in a CDMA network. This shows that, by using smart antennas, each user can transmit with much lower power, and therefore the system capacity increases significantly
Communication-efficient failure detection and consensus in omission environments Failure detectors have been shown to be a very useful mechanism to solve the consensus problem in the crash failure model, for which a number of communication-efficient algorithms have been proposed. In this paper we deal with the definition, implementation and use of communication-efficient failure detectors in the general omission failure model, where processes can fail by crashing and by omitting messages when sending and/or receiving. We first define a new failure detector class for this model in terms of completeness and accuracy properties. Then we propose an algorithm that implements a failure detector of the proposed class in a communication-efficient way, in the sense that only a linear number of links are used to send messages forever. We also explain how the well-known consensus algorithm of Chandra and Toueg can be adapted in order to use the proposed failure detector.
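As a hedged companion sketch, here is a toy heartbeat-based failure detector for the simpler crash model (the paper works in the harder general-omission model and under communication-efficiency constraints; peer names and timeouts below are hypothetical). The adaptive timeout illustrates how completeness and accuracy are traded off:

```python
import time

class HeartbeatDetector:
    """Toy eventually-accurate failure detector for the crash model."""

    def __init__(self, peers, timeout=1.0):
        self.timeout = {p: timeout for p in peers}
        self.last_beat = {p: time.monotonic() for p in peers}
        self.suspected = set()

    def on_heartbeat(self, peer):
        self.last_beat[peer] = time.monotonic()
        if peer in self.suspected:       # false suspicion detected
            self.suspected.discard(peer)
            self.timeout[peer] *= 2      # back off to improve accuracy

    def suspects(self):
        now = time.monotonic()
        for peer, beat in self.last_beat.items():
            if now - beat > self.timeout[peer]:
                self.suspected.add(peer)  # completeness: crashed peers stay here
        return set(self.suspected)

detector = HeartbeatDetector(["p1", "p2", "p3"])
detector.on_heartbeat("p1")
print(detector.suspects())
```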
A 15.5 dB, wide signal swing, dynamic amplifier using a common-mode voltage detection technique This paper presents a high-speed, low-power and wide signal swing differential dynamic amplifier using a common-mode voltage detection technique. The proposed dynamic amplifier achieves a 15.5 dB gain with less than 1 dB drop over a signal swing of 1.3 Vpp at an operating frequency of 1.5 GHz with a VDD of 1.2 V in 90 nm CMOS. The power consumption of the proposed circuit can be reduced linearly with operating frequency lowering.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
1.035949
0.035755
0.034979
0.034472
0.022549
0.018249
0.008464
0.000167
0
0
0
0
0
0
Dynamic Characteristics Preserving Data Compressing Algorithm for Transactive Energy Management Frameworks Several players are active in smart grids and involved in the decision-making process. These players are fully interconnected in the transactive energy (TE) framework to operate the system optimally. Under TE, final decisions are adopted after various consensus steps between participants. Hence, a huge amount of data and information is communicated in this framework, which leads to complex communication algorithms and wide-bandwidth channels. Besides, the prosumer datasets in the TE framework exhibit a low level of sparsity, which is due to the prosumers' independent interactions with peers and the grid. This article proves the low sparsity of the prosumer datasets and shows the shortcomings of existing methods. A dynamic intelligent algorithm is proposed in this article to characterize the prosumer data based on the mutual information (MI) theorem. In addition, two data compression algorithms are proposed to reduce the bandwidth and space needed for communication and storage, respectively. The proposed algorithms for data modeling and compression provide superior performance with minimum information loss. This article saves more communication channel bandwidth compared with conventional methods. Fast and robust adaptation of the proposed algorithm facilitates the practical implementation of energy management in the TE framework when wide data transmission is needed. The performance of the proposed algorithms has been evaluated using simulated prosumer datasets and real-world residential and industrial consumer data.
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use {\em dominance frontiers}, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ⋄W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ⋄W. Thus, ⋄W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area² product (EDA²P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA²P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
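For reference, the scaled-form iterations the review centers on, for minimizing f(x) + g(z) subject to Ax + Bz = c with penalty parameter ρ > 0 and scaled dual variable u (this is the standard statement of the method, not a new result):

```latex
x^{k+1} := \arg\min_{x} \Big( f(x) + \tfrac{\rho}{2}\big\|Ax + Bz^{k} - c + u^{k}\big\|_2^2 \Big), \\
z^{k+1} := \arg\min_{z} \Big( g(z) + \tfrac{\rho}{2}\big\|Ax^{k+1} + Bz - c + u^{k}\big\|_2^2 \Big), \\
u^{k+1} := u^{k} + Ax^{k+1} + Bz^{k+1} - c .
```

Splitting the objective this way lets each subproblem be solved locally, which underlies the distributed MPI and MapReduce implementations discussed.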
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)- size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε2}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D+√n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(logN), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator, while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via error amplifier) with a moderate-gain high-frequency path (via bandpass filter) at the inputs of the PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate the two zeros required in a Type III compensator. A constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process with the constant Gm/C biasing technique applied to one of the designs. Measurement results show that the converter output settles within 7 μs for a load current step of 500 mA. Peak efficiency of 97% is obtained at 360 mW output power, and high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of the proposed compensator are reduced by more than 75% in both designs, compared to an equivalent conventional Type III compensator.
Fundamental analysis of a car to car visible light communication system This paper presents a mathematical model for car-to-car (C2C) visible light communications (VLC) that aims to predict the system performance under different communication geometries. A market-weighted headlamp beam pattern model is employed. We consider both the line-of-sight (LOS) and non-line-of-sight (NLOS) links, and outline the relationship between the communication distance, the system bit error rate (BER) performance and the BER distribution on a vertical plane. Results show that by placing a photodetector (PD) at a height of 0.2-0.4 m above the road surface in the car, the communications coverage range can be extended up to 20 m at a data rate of 2 Mbps.
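For orientation, a commonly used Lambertian line-of-sight channel-gain model from the VLC literature, sketched in Python (a simplification: the paper employs a market-weighted headlamp beam pattern rather than a Lambertian source, and optical filter and concentrator gains are omitted here; all parameter values below are hypothetical):

```python
import math

def lambertian_los_gain(d, phi, psi, m=1.0, area=1e-4, fov=math.radians(60)):
    """DC channel gain of a Lambertian LOS link.
    d: distance (m); phi: irradiance angle and psi: incidence angle (rad);
    m: Lambertian order; area: photodetector area (m^2); fov: receiver FOV."""
    if abs(psi) > fov:
        return 0.0  # outside the receiver's field of view
    return (m + 1) * area / (2 * math.pi * d ** 2) \
        * math.cos(phi) ** m * math.cos(psi)

# Hypothetical geometry: PD mounted 0.3 m above the road, 20 m from the lamp.
print(lambertian_los_gain(d=20.0, phi=0.1, psi=0.1))
```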
A 32-Channel Time-Multiplexed Artifact-Aware Neural Recording System This paper presents a low-power, low-noise microsystem for the recording of neural local field potentials or intracranial electroencephalographic signals. It features 32 time-multiplexed channels at the electrode interface and offers the possibility to spatially delta encode data to take advantage of the large correlation of signals captured from nearby channels. The circuit also implements a mixed-signal voltage-triggered auto-ranging algorithm which makes it possible to attenuate large interferers in the digital domain while preserving neural information. This effectively increases the system dynamic range and avoids the onset of saturation. A prototype, fabricated in a standard 180 nm CMOS process, has been experimentally verified in-vitro with cellular cultures of primary cortical neurons from mice. The system shows an integrated input-referred noise in the 0.5–200 Hz band of 1.4 μVrms for a spot noise of about 85 nV/√Hz. The system draws 1.5 μW per channel from a 1.2 V supply and obtains 71 dB + 26 dB dynamic range when the artifact-aware auto-ranging mechanism is enabled, without penalising other critical specifications such as crosstalk between channels or common-mode and power supply rejection ratios.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Adaptive Sense Current Control For DC-DC Boost Converters To Get Accurate Voltage This study utilizes a new adaptive sense current controller to obtain an accurate power supply. The proposed controller effectively reduces the output ripple voltage of converters operated over the full load current range. This reduction is realized using an adaptive sense current circuit that automatically adjusts the inductor current according to operational conditions. The proposed boost converter is designed and fabricated with a standard TSMC 3.3/5 V 0.35-μm 2P4M CMOS technology. The experimental results show that the power-conversion efficiency of the proposed boost converter is 2-5% higher than that of the conventional converter with a current-limited circuit. The proposed circuit greatly reduces (i.e. by 76%) the output ripple voltage compared with the conventional circuit at a 10 mA loading current.
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use {\em dominance frontiers}, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ⋄W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ⋄W. Thus, ⋄W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area² product (EDA²P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA²P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)- size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε2}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D+√n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(logN), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator, while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via error amplifier) with a moderate-gain high-frequency path (via bandpass filter) at the inputs of the PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate the two zeros required in a Type III compensator. A constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process with the constant Gm/C biasing technique applied to one of the designs. Measurement results show that the converter output settles within 7 μs for a load current step of 500 mA. Peak efficiency of 97% is obtained at 360 mW output power, and high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of the proposed compensator are reduced by more than 75% in both designs, compared to an equivalent conventional Type III compensator.
Conductance modulation techniques in switched-capacitor DC-DC converter for maximum-efficiency tracking and ripple mitigation in 22nm Tri-gate CMOS Active conduction modulation techniques are demonstrated in a fully integrated multi-ratio switched-capacitor voltage regulator with hysteretic control, implemented in 22nm tri-gate CMOS with high-density MIM capacitors. We present (i) an adaptive switching frequency and switch-size scaling scheme for maximum efficiency tracking across a wide range of voltages and currents, governed by a frequency-based control law that is experimentally validated across multiple dies and temperatures, and (ii) a simple active ripple mitigation technique to modulate the gate drive of select MOSFET switches effectively in all conversion modes. Efficiency boosts of up to 15% are measured under light-load conditions. A load-independent output ripple of less than 50 mV is achieved, enabling fewer interleaving phases. Testchip implementations and measurements demonstrate ease of integration in SoC designs, power efficiency benefits and EMI/RFI improvements.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Multitone Feedback Through Demodulating Log Detector for Detection of Spurious Emissions in Software Radio This paper provides an analysis of a log detector in order to determine its response to a multitone input for detection of spurious emissions in a radio frequency transmitter. Treatment is given to the single tone response of the log detector and extended to a two-tone log detector system, where a large signal and a small signal are present. The large signal is observed to experience logarithmic p...
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use {\em dominance frontiers}, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ⋄W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ⋄W. Thus, ⋄W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area2 product (EDA2P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA2P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
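Since the abstract names the lasso as a flagship application, here is a minimal single-machine sketch of ADMM applied to it; the x/z splitting, the penalty rho, and the fixed iteration count are illustrative assumptions rather than the review's prescribed settings.

```python
import numpy as np

def lasso_admm(A, b, lam, rho=1.0, iters=200):
    """Minimize 0.5*||Ax - b||^2 + lam*||x||_1 with the x/z splitting of ADMM."""
    n = A.shape[1]
    x, z, u = (np.zeros(n) for _ in range(3))
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))  # factor once, reuse every iteration
    Atb = A.T @ b
    for _ in range(iters):
        x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))  # quadratic step
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0.0)    # soft-threshold step
        u += x - z                                                          # dual update
    return z

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
x_true = np.where(rng.random(20) < 0.3, rng.standard_normal(20), 0.0)
print(lasso_admm(A, A @ x_true, lam=0.1)[:5])  # sparse estimate close to x_true
```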
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors introduced by node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)-size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε²}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D+√n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(log N), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator; the Type III compensator response is synthesized by adding a high-gain low-frequency path (via error amplifier) with a moderate-gain high-frequency path (via bandpass filter) at the inputs of the PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate the two zeros required in a Type III compensator. A constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process with the constant Gm/C biasing technique being applied to one of the designs. Measurement results show that the converter output settles within 7 μs for a load current step of 500 mA. Peak efficiency of 97% is obtained at 360 mW output power, and high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of the proposed compensator are reduced by >75% in both designs, compared to an equivalent conventional Type III compensator.
Conductance modulation techniques in switched-capacitor DC-DC converter for maximum-efficiency tracking and ripple mitigation in 22nm Tri-gate CMOS Active conduction modulation techniques are demonstrated in a fully integrated multi-ratio switched-capacitor voltage regulator with hysteretic control, implemented in 22nm tri-gate CMOS with high-density MIM capacitor. We present (i) an adaptive switching frequency and switch-size scaling scheme for maximum efficiency tracking across a wide range of voltages and currents, governed by a frequency-based control law that is experimentally validated across multiple dies and temperatures, and (ii) a simple active ripple mitigation technique to modulate the gate drive of select MOSFET switches effectively in all conversion modes. Efficiency boosts of up to 15% are measured under light-load conditions. Load-independent output ripple of <50 mV is achieved, enabling fewer interleaving phases. Testchip implementations and measurements demonstrate ease of integration in SoC designs, power efficiency benefits, and EMI/RFI improvements.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
score_0–score_13: 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
A 2× Time-Interleaved 28-GS/s 8-Bit 0.03-mm² Switched-Capacitor DAC in 16-nm FinFET CMOS This article presents a compact 2× time-interleaved switched-capacitor (SC) digital-to-analog converter (DAC) for digital-intensive transmitter architectures. To minimize area and to leverage the strengths of FinFET technology, the implementation departs from the traditional current-steering approach and consists mainly of inverters and sub-femtofarad SCs. The DAC's architecture is based on parallel charge redistribution and separates level generation, pulse timing, and output power generation. The described 28-GS/s 8-bit prototype design occupies 0.03 mm² in 16-nm CMOS and supports up to 0.32-Vpp signal swing across its differential 100-Ω load. It achieves an SFDR ≥ 37 dB and an IM3 ≤ -45.6 dBc across the first Nyquist zone while consuming 88 mW from a single 0.8-V supply.
The sliding DFT The sliding DFT process for spectrum analysis was presented and shown to be more efficient than the popular Goertzel (1958) algorithm for sample-by-sample DFT bin computations. The sliding DFT provides computational advantages over the traditional DFT or FFT for many applications requiring successive output calculations, especially when only a subset of the DFT output bins are required. Methods for output stabilization as well as time-domain data windowing by means of frequency-domain convolution were also discussed. A modified sliding DFT algorithm, called the sliding Goertzel DFT, was proposed to further reduce the computational workload. We start our sliding DFT discussion by providing a review of the Goertzel algorithm and use its behavior as a yardstick to evaluate the performance of the sliding DFT technique. We examine stability issues regarding the sliding DFT implementation as well as review the process of frequency-domain convolution to accomplish time-domain windowing. Finally, a modified sliding DFT structure is proposed that provides improved computational efficiency.
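To make the recursion concrete, here is a minimal single-bin sliding DFT sketch following the standard update S_k(n) = (S_k(n-1) + x(n) - x(n-N))·e^{j2πk/N}; the circular-buffer layout and the test signal are illustrative assumptions, and the stabilization techniques the article discusses are omitted.

```python
import numpy as np

def sliding_dft_bin(x, k, N):
    """Track DFT bin k over a length-N sliding window, one cheap update per sample."""
    w = np.exp(2j * np.pi * k / N)    # single twiddle factor for this bin
    S = 0.0 + 0.0j
    buf = np.zeros(N, dtype=complex)  # circular buffer of the last N samples
    out = []
    for n, xn in enumerate(x):
        S = (S + xn - buf[n % N]) * w  # comb (drop oldest sample), then resonate
        buf[n % N] = xn
        out.append(S)
    return np.array(out)

x = np.cos(2 * np.pi * 3 * np.arange(64) / 64)        # tone centered on bin 3
print(abs(sliding_dft_bin(x, k=3, N=64)[-1]))          # ~N/2 = 32 once the window fills
```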
Derivative-free optimization: a review of algorithms and comparison of software implementations. This paper addresses the solution of bound-constrained optimization problems using algorithms that require only the availability of objective function values but no derivative information. We refer to these algorithms as derivative-free algorithms. Fueled by a growing number of applications in science and engineering, the development of derivative-free optimization algorithms has long been studied, and it has found renewed interest in recent years. Along with many derivative-free algorithms, many software implementations have also appeared. The paper presents a review of derivative-free algorithms, followed by a systematic comparison of 22 related implementations using a test set of 502 problems. The test bed includes convex and nonconvex problems, smooth as well as nonsmooth problems. The algorithms were tested under the same conditions and ranked under several criteria, including their ability to find near-global solutions for nonconvex problems, improve a given starting point, and refine a near-optimal solution. A total of 112,448 problem instances were solved. We find that the ability of all these solvers to obtain good solutions diminishes with increasing problem size. For the problems used in this study, TOMLAB/MULTIMIN, TOMLAB/GLCCLUSTER, MCS and TOMLAB/LGO are better, on average, than other derivative-free solvers in terms of solution quality within 2,500 function evaluations. These global solvers outperform local solvers even for convex problems. Finally, TOMLAB/OQNLP, NEWUOA, and TOMLAB/MULTIMIN show superior performance in terms of refining a near-optimal solution.
The analysis and improvement of a current-steering DAC's dynamic SFDR-I: the cell-dependent delay differences For high-accuracy current-steering digital-to-analog converters (DACs), the delay differences between the current sources are one of the major causes of poor dynamic performance. In this paper, a mathematical model describing the impact of the delay differences on the DAC's SFDR is presented. The results are verified by comparison to behavioral-level simulations and to actual measurement data from published papers. Based on this analysis, the delay differences cancellation (DDC) technique is proposed to reduce the impact of the delay differences on the SFDR, and it is verified by simulation results.
Digital Background Calibration of a Split Current-Steering DAC A digital background calibration method for a current-steering digital-to-analog converter (DAC) is presented. The algorithm uses one comparator for calibration and corrects for current-source mismatch, which causes DAC nonlinearity. The DAC is split into two identical channels, with each channel consisting of a main DAC and an auxiliary DAC. The comparator examines the difference between the two channels to generate a binary error signal, which is used in a sign-LMS algorithm. Simulation results are presented to demonstrate the split-calibration algorithm.
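As a behavioral illustration of the sign-LMS loop the abstract describes (not the chip's implementation), the sketch below models mismatched unary current sources and a one-bit comparator error driving digital correction coefficients; the array size, step size, and mismatch statistics are made-up assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
true_err = rng.normal(0.0, 0.01, 16)     # hypothetical static current-source errors
corr = np.zeros(16)                      # digital correction coefficients to learn
mu = 1e-4                                # sign-LMS step size (assumed)

for _ in range(200_000):
    code = rng.integers(0, 2, 16)        # which unary sources the input code enables
    residual = code @ (true_err - corr)  # uncorrected error the comparator observes
    e_sign = np.sign(residual)           # comparator yields only a binary error signal
    corr += mu * e_sign * code           # sign-LMS update on the active sources only

print(np.max(np.abs(true_err - corr)))   # mismatch estimate converges toward zero
```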
A 112 Gb/s PAM-4 56 Gb/s NRZ Reconfigurable Transmitter With Three-Tap FFE in 10-nm FinFET This paper presents a reconfigurable 56 GS/s transmitter (TX) that operates up to 112 Gb/s with four-level pulse-amplitude modulation (PAM-4) and at 56 Gb/s with the non-return-to-zero (NRZ) modulation scheme. Fabricated in the 10-nm FinFET technology, the TX incorporates a four-way interleaved quarter-rate architecture with a three-tap feed-forward equalizer (FFE). Key features of the TX include a 1-UI pulse-generator-based 4:1 serializer combined with a current-mode logic (CML) driver, low-power data-serializing paths, an output pad-network using a multi-segment π-coil for bandwidth co-optimization together with ESD diodes, sub-80-fs resolution duty-cycle detector/corrector (DCD/DCC) and quadrature-error detector/corrector (QED/QEC) circuits, and a hybrid LC phase-locked loop (PLL) with quadrature clock distribution circuits. The TX operating at 112 Gb/s in PAM-4 modulation consumes 232 mW from 1- and 1.5-V supplies, achieving a 2.07-pJ/b energy efficiency. The TX front end occupies an area of 0.0302 mm².
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
Chains of recurrences—a method to expedite the evaluation of closed-form functions Chains of Recurrences (CR's) are introduced as an effective method to evaluate functions at regular intervals. Algebraic properties of CR's are examined and an algorithm that constructs a CR for a given function is explained. Finally, an implementation of the method in MAXIMA/Common Lisp is discussed.
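A hedged sketch of the core idea: for a polynomial, the chain of recurrences is its forward-difference table, and shifting the chain produces the value at the next grid point using additions only. This is the textbook difference-table form of the method, not the MAXIMA/Common Lisp implementation the abstract mentions.

```python
def cr_coefficients(f, x0, h, degree):
    """Build the CR (forward-difference table) of a degree-d polynomial f at x0, step h."""
    vals = [f(x0 + i * h) for i in range(degree + 1)]
    cr = []
    while vals:
        cr.append(vals[0])
        vals = [b - a for a, b in zip(vals, vals[1:])]  # successive forward differences
    return cr

def cr_evaluate(cr, count):
    """Emit f(x0), f(x0+h), ... using only additions per point (the CR shift rule)."""
    cr = cr[:]
    out = []
    for _ in range(count):
        out.append(cr[0])
        for i in range(len(cr) - 1):
            cr[i] += cr[i + 1]
    return out

# f(x) = x^3 - 2x + 1 on the grid 0, 0.5, 1, 1.5, 2 -> [1, 0.125, 0, 1.375, 5]
print(cr_evaluate(cr_coefficients(lambda x: x**3 - 2*x + 1, 0.0, 0.5, 3), 5))
```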
Consensus problems in networks of agents with switching topology and time-delays. In this paper, we discuss consensus problems for a network of dynamic agents with fixed and switching topologies. We analyze three cases: i) networks with switching topology and no time-delays, ii) networks with fixed topology and communication time-delays, and iii) max-consensus problems (or leader determination) for groups of discrete-time agents. In each case, we introduce a linear/nonlinear consensus protocol and provide convergence analysis for the proposed distributed algorithm. Moreover, we establish a connection between the Fiedler eigenvalue of the information flow in a network (i.e., algebraic connectivity of the network) and the negotiation speed (or performance) of the corresponding agreement protocol. It turns out that balanced digraphs play an important role in addressing average-consensus problems. We introduce disagreement functions that play the role of Lyapunov functions in convergence analysis of consensus protocols. A distinctive feature of this work is to address consensus problems for networks with directed information flow. We provide analytical tools that rely on algebraic graph theory, matrix theory, and control theory. Simulations are provided that demonstrate the effectiveness of our theoretical results.
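The fixed-topology, delay-free case reduces to a simple linear iteration driven by the graph Laplacian; the sketch below runs it on a small undirected ring as an illustrative assumption (the graph, step size, and horizon are not from the paper).

```python
import numpy as np

# x(t+1) = x(t) - eps * L x(t), the discrete-time linear consensus protocol.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)   # adjacency of an undirected 4-ring
L = np.diag(A.sum(axis=1)) - A              # graph Laplacian
x = np.array([1.0, 5.0, -2.0, 4.0])         # initial agent states
eps = 0.25                                  # must satisfy eps < 1/(max degree) = 0.5

for _ in range(100):
    x = x - eps * (L @ x)

print(x)  # all states converge to the average of the initial values, 2.0
```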
Gossip-based aggregation in large dynamic networks As computer networks increase in size, become more heterogeneous and span greater geographic distances, applications must be designed to cope with the very large scale, poor reliability, and often, with the extreme dynamism of the underlying network. Aggregation is a key functional building block for such applications: it refers to a set of functions that provide components of a distributed system access to global information including network size, average load, average uptime, location and description of hotspots, and so on. Local access to global information is often very useful, if not indispensable for building applications that are robust and adaptive. For example, in an industrial control application, some aggregate value reaching a threshold may trigger the execution of certain actions; a distributed storage system will want to know the total available free space; load-balancing protocols may benefit from knowing the target average load so as to minimize the load they transfer. We propose a gossip-based protocol for computing aggregate values over network components in a fully decentralized fashion. The class of aggregate functions we can compute is very broad and includes many useful special cases such as counting, averages, sums, products, and extremal values. The protocol is suitable for extremely large and highly dynamic systems due to its proactive structure---all nodes receive the aggregate value continuously, thus being able to track any changes in the system. The protocol is also extremely lightweight, making it suitable for many distributed applications including peer-to-peer and grid computing systems. We demonstrate the efficiency and robustness of our gossip-based protocol both theoretically and experimentally under a variety of scenarios including node and communication failures.
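A minimal sketch of push-pull averaging gossip in the spirit of the protocol described here, assuming an idealized peer-sampling service (the uniform random peer choice below); after a few rounds every node's local value closely approximates the global average. Counting and sums follow from the same exchange with different initial values.

```python
import random

def gossip_average(values, rounds=30):
    """Push-pull averaging: each round, every node pairs with a random peer and
    both adopt the pair's mean. The exchange conserves total mass, so all local
    values converge to the global average."""
    vals = list(values)
    n = len(vals)
    for _ in range(rounds):
        for i in range(n):
            j = random.randrange(n)      # idealized uniform peer sampling
            mean = (vals[i] + vals[j]) / 2.0
            vals[i] = vals[j] = mean     # mass-conserving state exchange
    return vals

loads = [random.uniform(0, 100) for _ in range(50)]
print(sum(loads) / 50, gossip_average(loads)[0])  # the two numbers agree closely
```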
Linear Amplification with Nonlinear Components A technique for producing bandpass linear amplification with nonlinear components (LINC) is described. The bandpass signal first is separated into two constant envelope component signals. All of the amplitude and phase information of the original bandpass signal is contained in phase modulation on the component signals. These constant envelope signals can be amplified or translated in frequency by amplifiers or mixers which have nonlinear input-output amplitude transfer characteristics. Passive linear combining of the amplified and/or translated component signals produces an amplified and/or translated replica of the original signal.
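A worked sketch of the LINC decomposition on complex-baseband samples: the envelope is absorbed into an outphasing angle θ = arccos(a/a_max), yielding two constant-envelope components whose sum restores the original signal. The test signal below is an illustrative assumption, not an example from the paper.

```python
import numpy as np

def linc_decompose(s, a_max=None):
    """Split a complex-baseband signal into two constant-envelope components
    with s = s1 + s2 and |s1| = |s2| = a_max / 2 (the LINC separation)."""
    a = np.abs(s)
    a_max = a_max or a.max()
    phi = np.angle(s)
    theta = np.arccos(np.clip(a / a_max, 0.0, 1.0))  # outphasing angle
    s1 = 0.5 * a_max * np.exp(1j * (phi + theta))
    s2 = 0.5 * a_max * np.exp(1j * (phi - theta))
    return s1, s2

t = np.linspace(0, 1, 1000)
s = (0.3 + 0.7 * np.cos(2 * np.pi * 5 * t)) * np.exp(2j * np.pi * 50 * t)
s1, s2 = linc_decompose(s)
print(np.max(np.abs(s1 + s2 - s)))  # ~0: passive combining reproduces the input
```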
Recurrent-Fuzzy-Neural-Network-Controlled Linear Induction Motor Servo Drive Using Genetic Algorithms A recurrent fuzzy neural network (RFNN) controller based on real-time genetic algorithms (GAs) is developed for a linear induction motor (LIM) servo drive in this paper. First, the dynamic model of an indirect field-oriented LIM servo drive is derived. Then, an online training RFNN with a backpropagation algorithm is introduced as the tracking controller. Moreover, to guarantee the global convergence of tracking error, a real-time GA is developed to search the optimal learning rates of the RFNN online. The GA-based RFNN control system is proposed to control the mover of the LIM for periodic motion. The theoretical analyses for the proposed GA-based RFNN controller are described in detail. Finally, simulated and experimental results show that the proposed controller provides high-performance dynamic characteristics and is robust with regard to plant parameter variations and external load disturbance
3.4 A 36Gb/s PAM4 transmitter using an 8b 18GS/s DAC in 28nm CMOS At data rates beyond 10Gb/s, most wireline links employ NRZ signaling. Serial NRZ links as high as 56Gb/s and 60Gb/s have been reported [1]. Nevertheless, as the rate increases, the constraints imposed by the channel, package, and die become more severe and do not benefit from process scaling in the same fashion that circuit design does. Reflections from impedance discontinuities in the PCB and package caused by vias and connectors introduce significant signal loss and distortions at higher frequencies. Even with an ideal channel, at every package-die interface, there is an intrinsic parasitic capacitance due to the pads and the ESD circuit amounting to at least 150fF, and a 50Ω resistor termination at both the transmit and receive ends resulting in an intrinsic pole at 23GHz or lower. In light of all these limitations, serial NRZ signaling beyond 60Gb/s appears suboptimal in terms of both power and performance. Utilizing various modulation techniques such as PAM4, one can achieve a higher spectral efficiency [2]. To enable such transmission formats, high-speed moderate-resolution data converters are required. This paper describes a 36Gb/s transmitter based on an 18GS/s 8b DAC implemented in 28nm CMOS, compliant to the new IEEE802.3bj standard for 100G Ethernet over backplane and copper cables [3].
Power Efficiency Comparison of Event-Driven and Fixed-Rate Signal Conversion and Compression for Biomedical Applications Energy-constrained biomedical recording systems need power-efficient data converters and good signal compression in order to meet the stringent power consumption requirements of many applications. In literature today, typically a SAR ADC in combination with digital compression is used. Recently, alternative event-driven sampling techniques have been proposed that incorporate compression in the ADC, such as level-crossing A/D conversion. This paper describes the power efficiency analysis of such level-crossing ADC (LCADC) and the traditional fixed-rate SAR ADC with simple compression. A model for the power consumption of the LCADC is derived, which is then compared to the power consumption of the SAR ADC with zero-order hold (ZOH) compression for multiple biosignals (ECG, EMG, EEG, and EAP). The LCADC is more power efficient than the SAR ADC up to a cross-over point in quantizer resolution (for example 8 bits for an EEG signal). This cross-over point decreases with the ratio of the maximum to average slope in the signal of the application. It also changes with the technology and design techniques used. The LCADC is thus suited for low to medium resolution applications. In addition, the event-driven operation of an LCADC results in fewer data to be transmitted in a system application. The event-driven LCADC without timer and with single-bit quantizer achieves a reduction in power consumption at system level of two orders of magnitude, an order of magnitude better than the SAR ADC with ZOH compression. At system level, the LCADC thus offers a big advantage over the SAR ADC.
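To make the event-driven idea concrete, here is a minimal level-crossing sampler: it emits a (time, level) event only when the input crosses the next quantization level, so slowly varying stretches generate almost no data. The pulse-like test input and the step size delta are illustrative assumptions, not the paper's model.

```python
import numpy as np

def level_crossing_sample(x, t, delta):
    """Event-driven sampling: emit an event only when the input moves by one
    quantization step delta, so compression happens inside the 'ADC'."""
    events = []
    level = np.round(x[0] / delta) * delta
    for ti, xi in zip(t, x):
        while xi >= level + delta:       # rising crossings
            level += delta
            events.append((ti, level))
        while xi <= level - delta:       # falling crossings
            level -= delta
            events.append((ti, level))
    return events

t = np.linspace(0, 1, 10_000)
pulse = np.exp(-((t - 0.5) ** 2) / 0.001)          # a single sharp biosignal-like pulse
ev = level_crossing_sample(pulse, t, delta=1 / 64)
print(len(ev), "events vs", len(t), "fixed-rate samples")  # far fewer events
```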
score_0–score_13: 1.1, 0.1, 0.1, 0.1, 0.1, 0.033333, 0, 0, 0, 0, 0, 0, 0, 0
Backups and the right to be forgotten in the GDPR: An uneasy relationship The recent enforcement of the GDPR has placed extra burdens on data controllers operating within the EU. Beyond other challenges, the exercise of the Right to be Forgotten by individuals who request erasure of their personal information has become a thorny issue when applied to backups and archives. In this paper, we discuss the GDPR forgetting requirements with respect to their impact on the backup and archiving procedures stipulated by modern security standards. We specifically examine the implications of erasure requests on current IT backup systems, and we highlight a number of envisaged organizational, business, and technical challenges pertaining to the widely known backup standards, data retention policies, backup mediums, search services, and ERP systems.
Software complexity measurement Inappropriate use of software complexity measures can have large, damaging effects by rewarding poor programming practices and demoralizing good programmers. Software complexity measures must be critically evaluated to determine the ways in which they can best be used.
Standards for XML and Web Services Security XML schemas convey the data syntax and semantics for various application domains, such as business-to-business transactions, medical records, and production status reports. However, these schemas seldom address security issues, which can lead to a worst-case scenario of systems and protocols with no security at all. At best, they confine security to transport level mechanisms such as secure sockets layer (SSL). On the other hand, the omission of security provisions from domain schemas opens the way for generic security specifications based on XML document and grammar extensions. These specifications are orthogonal to domain schemas but integrate with them to support a variety of security objectives, such as confidentiality, integrity, and access control. In 2002, several specifications progressed toward providing a comprehensive standards framework for secure XML-based applications. The paper shows some of the most important specifications, the issues they address, and their dependencies.
Architecture and design of adaptive object-models Many object-oriented information systems share an architectural style that emphasizes flexibility and run-time adaptability. Business rules are stored externally to the program such as in a database or XML files instead of in code. The object model that the user cares about is part of the database, and the object model of the code is just an interpreter of the users' object model. We call these systems "Adaptive Object-Models", because the users' object model is interpreted at runtime and can be changed with immediate (but controlled) effects on the system interpreting it. The real power in Adaptive Object-Models is that they have a definition of a domain model and rules for its integrity and can be configured by domain experts external to the execution of the program. This paper describes the Adaptive Object-Model architecture along with its strengths and weaknesses. It illustrates the Adaptive Object-Model architectural style by describing a framework for Medical Observations (following Fowler's Analysis Patterns) that we built.
Concurrent Data Materialization for XML-Enabled Database with Semantic Metadata For a company with many databases in different data models, it is necessary to consolidate them into one interchangeable data model and to present data in more than one data model concurrently, both to different users and to individual users who need to access the data in more than one data model. The benefit is that users can keep to their own data model while accessing a database stored in another data model. This paper presents semantic metadata that preserve database constraints for data materialization, supporting the user's view of the database on an ad hoc basis. The semantic metadata can store the captured semantics of a relational or an XML-enabled database into classes. The stored constraints and data can be materialized into a target database upon user request, and the user may perform data materialization many times, alternating between targets. The process can provide a relational as well as an XML view to users simultaneously. This concurrent data materialization function can be applied in a data warehouse to consolidate heterogeneous databases into a fact table in a data model of the user's choice. Furthermore, a user can obtain either a relational view or an XML view of the same dataset of an XML-enabled database interchangeably.
Exploring the future of enterprise architecture: A Zachman perspective. Highlights: the Zachman Framework is used to identify and reflect on the future grand challenges of enterprise architecture; models and theories that could be useful in coping with the identified grand challenges are discussed; current advances in the field of EA that are guided by the discussed models and theories are presented in order to exemplify their value; futuristic scenarios for the evolution of enterprise architecture research, education, and the enterprise architecture professions are presented.
Dependency-preserving normalization of relational and XML data Having a database design that avoids redundant information and update anomalies is the main goal of normalization techniques. Ideally, data as well as constraints should be preserved. However, this is not always achievable: while BCNF eliminates all redundancies, it may not preserve constraints, and 3NF, which achieves dependency preservation, may not always eliminate all redundancies. Our first goal is to investigate how much redundancy 3NF tolerates in order to achieve dependency preservation. We apply an information-theoretic measure and show that only prime attributes admit redundant information in 3NF, but their information content may be arbitrarily low. Then we study the possibility of achieving both redundancy elimination and dependency preservation by a hierarchical representation of relational data in XML. We provide a characterization of cases when an XML normal form called XNF guarantees both. Finally, we deal with dependency preservation in XML and show that like in the relational case, normalizing XML documents to achieve non-redundant data can result in losing constraints. By modifying the definition of XNF, we define another normal form for XML documents, X3NF, that generalizes 3NF for the case of XML and achieves dependency preservation.
Differential Power Analysis Cryptosystem designers frequently assume that secrets will be manipulated in closed, reliable computing environments. Unfortunately, actual computers and microchips leak information about the operations they process. This paper examines specific methods for analyzing power consumption measurements to find secret keys from tamper-resistant devices. We also discuss approaches for building cryptosystems that can operate securely in existing hardware that leaks information.
Searching in an unknown environment: an optimal randomized algorithm for the cow-path problem Searching for a goal is a central and extensively studied problem in computer science. In classical searching problems, the cost of a search function is simply the number of queries made to an oracle that knows the position of the goal. In many robotics problems, as well as in problems from other areas, we want to charge a cost proportional to the distance between queries (e.g., the time required to travel between two query points). With this cost function in mind, the abstract problem known as the w-lane cow-path problem was designed. There are known optimal deterministic algorithms for the cow-path problem; we give the first randomized algorithm in this paper. We show that our algorithm is optimal for two paths (w = 2) and give evidence that it is optimal for larger values of w. Subsequent to the preliminary version of this paper, Kao et al. (in "Proceedings, 5th ACM–SIAM Symposium on Discrete Algorithms," pp. 372–381, 1994) have shown that our algorithm is indeed optimal for all w ≥ 2. Our randomized algorithm gives expected performance that is almost twice as good as is possible with a deterministic algorithm. For the performance of our algorithm, we also derive the asymptotic growth with respect to w; despite similar complexity results for related problems, it appears that this growth has never been analyzed.
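For intuition, here is the classical deterministic doubling strategy for the two-path case, whose total travel is at most 9 times the distance to the goal; the paper's contribution is a randomized variant with a better expected ratio (roughly 4.59), which this sketch deliberately does not implement.

```python
def cow_path_doubling(goal_pos):
    """Deterministic doubling for the 2-path cow-path problem: sweep distances
    1, 2, 4, ... in alternating directions, returning to the origin each time.
    Total cost is at most 9 * |goal_pos|."""
    cost, step, direction = 0.0, 1.0, +1
    while True:
        reach = direction * step
        if (goal_pos > 0) == (reach > 0) and abs(reach) >= abs(goal_pos):
            return cost + abs(goal_pos)   # the goal lies within this sweep
        cost += 2 * step                  # walk out and back to the origin
        step *= 2
        direction = -direction

for d in (3.0, -10.0, 100.0):
    print(d, cow_path_doubling(d), cow_path_doubling(d) / abs(d))  # ratio <= 9
```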
Adaptive Synchronization of an Uncertain Complex Dynamical Network This brief paper further investigates the locally and globally adaptive synchronization of an uncertain complex dynamical network. Several network synchronization criteria are deduced. Especially, our hypotheses and designed adaptive controllers for network synchronization are rather simple in form. It is very useful for future practical engineering design. Moreover, numerical simulations are also given to show the effectiveness of our synchronization approaches.
Practical Mitigations for Timing-Based Side-Channel Attacks on Modern x86 Processors This paper studies and evaluates the extent to which automated compiler techniques can defend against timing-based side-channel attacks on modern x86 processors. We study how modern x86 processors can leak timing information through side-channels that relate to control flow and data flow. To eliminate key-dependent control flow and key-dependent timing behavior related to control flow, we propose the use of if-conversion in a compiler backend, and evaluate a proof-of-concept prototype implementation. Furthermore, we demonstrate two ways in which programs that lack key-dependent control flow and key-dependent cache behavior can still leak timing information on modern x86 implementations such as the Intel Core 2 Duo, and propose defense mechanisms against them.
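A minimal sketch of the if-conversion idea on a one-bit secret, written in Python purely to show the transformation a backend would apply (a real compiler emits x86 conditional moves or masking, and Python itself offers no constant-time guarantee); the 32-bit masking and function names are illustrative assumptions.

```python
def select_branchy(secret_bit: int, a: int, b: int) -> int:
    """Key-dependent branch: execution path, and so timing, depends on the secret."""
    return a if secret_bit else b

def select_if_converted(secret_bit: int, a: int, b: int) -> int:
    """If-converted form: both operands are computed and combined with a mask,
    removing the key-dependent control flow."""
    mask = -int(secret_bit) & 0xFFFFFFFF            # all-ones iff the bit is 1
    return (a & mask) | (b & ~mask & 0xFFFFFFFF)    # branchless 32-bit select

assert select_if_converted(1, 0xDEAD, 0xBEEF) == 0xDEAD
assert select_if_converted(0, 0xDEAD, 0xBEEF) == 0xBEEF
```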
Fully Integrated CMOS Power Amplifier With Efficiency Enhancement at Power Back-Off This paper presents a new approach for power amplifier design using deep submicron CMOS technologies. A transformer based voltage combiner is proposed to combine power generated from several low-voltage CMOS amplifiers. Unlike other voltage combining transformers, the architecture presented in this paper provides greater flexibility to access and control the individual amplifiers in a voltage comb...
Understanding the regenerative comparator circuit The regenerative comparator circuit, which lies at the heart of A/D conversion, slicer circuits, and memory sensing, is unstable, time-varying, nonlinear, and has multiple equilibria. That does not mean, as this paper shows, that it cannot be understood with simple equivalent circuits that reveal its dynamics completely, and enable it to be designed to specifications on static and dynamic offset and noise. The analysis is applied to the StrongArm latch.
A 12.6 mW, 573-2901 kS/s Reconfigurable Processor for Reconstruction of Compressively Sensed Physiological Signals. This article presents a reconfigurable processor based on the alternating direction method of multipliers (ADMM) algorithm for reconstructing compressively sensed physiological signals. The architecture is flexible to support physiological ExG [electrocardiography (ECG), electromyography (EMG), and electroencephalography (EEG)] signal with various signal dimensions (128, 256, 384, and 512). Data c...
score_0–score_13: 1.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.1, 0, 0, 0, 0, 0, 0, 0
Tiny Piezoelectric Harvesters: Principles, Constraints, and Power Conversion. Wireless microsystems can add intelligence to hospitals, homes, and factories that can save money, energy, and lives. Unfortunately, tiny batteries cannot store sufficient energy to sustain useful microsystems for long, and replacing or recharging the batteries of hundreds of networked nodes is costly and invasive in the case of the human body. Thankfully, shocks and vibrations are prevalent in many applications, so ambient kinetic energy can continually replenish batteries to extend the life of the systems they support. And since tiny devices produce minimal damping effects on motion, they can draw as much power as the microelectronics allow. Unfortunately, uncollected charge, breakdown voltages, and energy losses limit how much power harvesting microsystems can generate. This is why this paper reviews how tiny transducers generate power and how state-of-the-art diode bridges and switched inductors and their derivatives draw and output as much power as possible. Of prevailing technologies, in fact, the recycling bridge pre-damps the transducer at the highest voltage possible all the time to output the highest power. But because it still needs a regulating charger to stay at its maximum power point, other pre-damping switched inductors suffer lower losses and require less space. Although the pre-damping bridgeless solution pre-damps every other half cycle, it generates comparable power with only two switches. No harvester, however, escapes the limits that power losses and breakdown voltages impose, so output power is always finite, and in the case of miniaturized systems, not very high.
An Inductorless Bias-Flip Rectifier for Piezoelectric Energy Harvesting. Piezoelectric vibration energy harvesters have drawn much interest for powering self-sustained electronic devices. Furthermore, the continuous push toward miniaturization and higher levels of integration continues to form key drivers for autonomous sensor systems being developed as parts of the emerging Internet of Things (IoT) paradigm. The synchronized switch harvesting (SSH) on inductor and syn...
A 90.2% Peak Efficiency Multi-Input Single-Inductor Multi-Output Energy Harvesting Interface With Double-Conversion Rejection Technique and Buck-Based Dual-Conversion Mode This article presents a multi-input single-inductor multi-output energy-harvesting interface that extracts power from three independent sources and regulates three output voltages. The converter employs the proposed double-conversion rejection technique to reduce the double-converted power by up to 81.8% under the light-load condition and operates in various power conversion modes, including the proposed buck-based dual-conversion mode, to improve the power conversion efficiency and maximum load power. The proposed adaptive peak inductor current controller determines the inductor charging period, and the proposed digitally controlled zero-current detector detects the optimum zero-current point according to the operating mode. The proposed converter achieves a peak end-to-end efficiency of 90.2% and a maximum output power of 24 mW, improvements of approximately 7.52% and a factor of 1.85, respectively, compared with conventional buck-boost converters.
A Self-Powered P-SSHI Array Interface for Piezoelectric Energy Harvesters With Arbitrary Phase Difference Piezoelectric energy harvester (PEH) arrays are promising in many application scenarios. However, few interface circuits have been developed to manage the multiple ac inputs from PEHs. This article proposes an extensible parallel synchronized switch harvesting on inductor array interface scheme to realize a multiinput conversion from PEH arrays. We develop a split-inductor-capacitor topology that ...
Double Pile-Up Resonance Energy Harvesting Circuit for Piezoelectric and Thermoelectric Materials. This paper presents a double pile-up resonance energy harvesting circuit that efficiently and simultaneously extracts energy from a piezoelectric transducer (PZT) and a thermoelectric generator. The proposed harvester operates in a double pile-up mode (DPM) to efficiently extract energy from PZT with the enhanced damping force, resulting in a 1452% improvement in power extraction, which is the bes...
An autonomous piezoelectric energy harvesting IC based on a synchronous multi-shots technique This paper presents a fully autonomous integrated circuit (IC) dedicated to piezoelectric harvesters. The IC implements a novel Synchronous Electric Charge Extraction (SECE) technique which optimizes the energy transfer from a highly charged piezoelectric harvester to a low voltage storage element. The system deals with piezoelectric powers in the range of 10 μW to 1 mW and handles very high piezoelectric voltage values (>100 V), limited by the off-chip components around the IC. The IC has been fabricated in AMS 0.35-μm 3.3-V technology and its low power consumption (1 μW at 5 Hz) is particularly suitable for low-frequency harvesters. The Multi-Shots SECE (MS-SECE) technique implemented in the IC increases the efficiency by up to 25% compared to a standard SECE technique and allows the use of small off-chip components. An efficiency of 61% has been reached with a 125-mm³ coupled inductor at 40 V. Moreover, the complete system self-starts and works without any battery.
A novel control technique to eliminate output-voltage-ripple in switched-capacitor DC-DC converters A novel ripple mitigation technique is proposed for switched-capacitor voltage regulators (SCVR), which eliminates the output voltage ripple without using multi-phase interleaving. An inner control loop matches the SCVR's switch current to the load current on a cycle by cycle basis. A 2-phase 3:2 SCVR is designed in 45-nm CMOS process with the proposed control. For a 1.8 V to 1.05 V /40 mA converter, the proposed mitigation loop reduces the peak-to-peak output ripple from 330 mVp-p to 17 mVp-p, using total output capacitance of 4 nF/A. In addition, the proposed technique yields excellent regulation transient response.
Cognitive radio: brain-empowered wireless communications Cognitive radio is viewed as a novel approach for improving the utilization of a precious natural resource: the radio electromagnetic spectrum. The cognitive radio, built on a software-defined radio, is defined as an intelligent wireless communication system that is aware of its environment and uses the methodology of understanding-by-building to learn from the environment and adapt to statistical variations in the input stimuli, with two primary objectives in mind: · highly reliable communication whenever and wherever needed; · efficient utilization of the radio spectrum. Following the discussion of interference temperature as a new metric for the quantification and management of interference, the paper addresses three fundamental cognitive tasks. 1) Radio-scene analysis. 2) Channel-state estimation and predictive modeling. 3) Transmit-power control and dynamic spectrum management. This work also discusses the emergent behavior of cognitive radio.
Trellis-coded modulation with bit interleaving and iterative decoding This paper considers bit-interleaved coded modulation (BICM) for bandwidth-efficient transmission using software radios. A simple iterative decoding (ID) method with hard-decision feedback is suggested to achieve better performance. The paper shows that convolutional codes with good Hamming-distance property can provide both high diversity order and large free Euclidean distance for BICM-ID. The method offers a common framework for coded modulation over channels with a variety of fading statistics. In addition, BICM-ID allows an efficient combination of punctured convolutional codes and multiphase/level modulation, and therefore provides a simple mechanism for variable-rate transmission
A review of process fault detection and diagnosis: Part II: Qualitative models and search strategies In this part of the paper, we review qualitative model representations and search strategies used in fault diagnostic systems. Qualitative models are usually developed based on some fundamental understanding of the physics and chemistry of the process. Various forms of qualitative models such as causal models and abstraction hierarchies are discussed. The relative advantages and disadvantages of these representations are highlighted. In terms of search strategies, we broadly classify them as topographic and symptomatic search techniques. Topographic searches perform malfunction analysis using a template of normal operation, whereas, symptomatic searches look for symptoms to direct the search to the fault location. Various forms of topographic and symptomatic search strategies are discussed.
Dual Averaging for Distributed Optimization: Convergence Analysis and Network Scaling The goal of decentralized optimization over a network is to optimize a global objective formed by a sum of local (possibly nonsmooth) convex functions using only local computation and communication. It arises in various application domains, including distributed tracking and localization, multi-agent coordination, estimation in sensor networks, and large-scale machine learning. We develop and analyze distributed algorithms based on dual subgradient averaging, and we provide sharp bounds on their convergence rates as a function of the network size and topology. Our analysis allows us to clearly separate the convergence of the optimization algorithm itself and the effects of communication dependent on the network structure. We show that the number of iterations required by our algorithm scales inversely in the spectral gap of the network, and confirm this prediction's sharpness both by theoretical lower bounds and simulations for various networks. Our approach includes the cases of deterministic optimization and communication, as well as problems with stochastic optimization and/or communication.
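A small numerical sketch of the distributed dual averaging iteration analyzed here, z_i(t+1) = Σ_j P_ij z_j(t) + g_i(t) followed by x_i(t+1) = -α(t) z_i(t+1), on a ring of agents with private quadratic objectives; the network, step-size schedule, and objectives are illustrative assumptions, not the paper's experiments.

```python
import numpy as np

# Each agent i holds a private objective f_i(x) = 0.5 * (x - c_i)^2; the network
# minimizes their sum using only neighbor communication over a ring.
n = 8
c = np.arange(n, dtype=float)              # private targets; the optimum is their mean
P = np.zeros((n, n))
for i in range(n):                         # lazy random walk on a ring: doubly stochastic
    P[i, i] = 0.5
    P[i, (i - 1) % n] = P[i, (i + 1) % n] = 0.25

z = np.zeros(n)                            # dual (averaged subgradient) variables
x = np.zeros(n)                            # primal iterates
for t in range(1, 2001):
    g = x - c                              # local subgradients at the current iterates
    z = P @ z + g                          # mix dual variables, then add gradients
    x = -z / np.sqrt(t)                    # proximal step with alpha(t) = 1/sqrt(t)

print(x.mean(), c.mean())                  # iterates approach the global optimum 3.5
```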
A Linear Permanent-Magnet Motor for Active Vehicle Suspension Traditionally, automotive suspension designs with passive components have been a compromise between the three conflicting demands of road holding, load carrying, and passenger comfort. Linear electromagnetic motor-based active suspension has superior controllability and bandwidth, provides shock load isolation between the vehicle chassis and wheel, and, therefore, has great potential. It also has the ability to recover energy that is dissipated in the shock absorber in the passive systems and results in a much more energy-efficient suspension system. This paper describes the issues pertinent to the design of a high force density tubular permanent-magnet (PM) motor for active suspension in terms of performance optimization, the use of a solid stator core for low-cost production and its impact on thrust force, and the assessment of demagnetization risk.
Implementation of LTE SC-FDMA on the USRP2 software defined radio platform In this paper we discuss the implementation of a Single Carrier Frequency Division Multiple Access (SC-FDMA) transceiver running over the Universal Software Radio Peripheral 2 (USRP2). SC-FDMA is the air interface which has been selected for the uplink in the latest Long Term Evolution (LTE) standard. In this paper we derive an AWGN channel model for SC-FDMA transmission, which is useful for benchmarking experimental results. In our implementation, we deal with signal scaling, equalization and partial synchronization to realize SC-FDMA transmission over a noisy channel at rates up to 5.184 Mbit/s. Experimental results on the Bit Error Rate (BER) versus Signal-to-Noise Ratio (SNR) are presented and compared to theoretical and simulated performance.
A 1V 3.5 μW Bio-AFE With Chopper-Capacitor-Chopper Integrator-Based DSL and Low Power GM-C Filter This brief presents a low-noise, low-power bio-signal acquisition analog front-end (Bio-AFE). It mainly includes a capacitively coupled chopper-stabilized instrumentation amplifier (CCIA), a programmable gain amplifier (PGA), a low-pass filter (LPF), and a successive approximation analog to digital converter (SAR ADC). A chopper-capacitor-chopper integrator based DC servo loop (C3IB-DSL...
score_0–score_13: 1.1055, 0.1, 0.1, 0.1, 0.05, 0.029323, 0.000106, 0, 0, 0, 0, 0, 0, 0
A novel credibility-based group decision making method for Enterprise Architecture scenario analysis using Data Envelopment Analysis Highlights: alignment of IT and business; Enterprise Architecture (EA) analysis; p-robust group DEA; fuzzy credibility constrained programming DEA. Analysis and selection of Enterprise Architecture (EA) scenarios is a difficult and complex decision-making process that directly affects the realization of long-term business strategies. This complexity is associated with contradictory objectives and significant uncertainties involved in the analysis process. Although a large body of intuitive and analytical models for EA analysis has evolved over the last few years, none of them leads to an efficient and optimized ranking in fuzzy environments. Moreover, it is necessary to simultaneously employ some complementary methods to reflect ambiguity and vagueness as the main sources of uncertainty. This paper incorporates the concept of the Data Envelopment Analysis (DEA) model into EA scenario analysis through a group analysis under uncertain conditions. To resolve the vagueness and ambiguity of the EA analysis, fuzzy credibility constrained programming and the p-robustness technique are applied, respectively. Not only is the proposed DEA model linear, robust, and flexible in aggregating experts' opinions in a group decision making process, but it is also successful in improving discrimination power - a major shortcoming associated with the classic DEA model. The proposed model provides useful solutions to support the decision making process for large-scale Information Technology (IT) development planning.
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use dominance frontiers, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
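As a hedged companion to this abstract, here is the compact dominance-frontier computation due to Cooper, Harvey, and Kennedy (a later simplification, not this paper's original algorithm), which walks the dominator tree upward from each join point's predecessors; DF(n) is exactly where SSA construction places phi-functions. The tiny diamond CFG is an illustrative assumption.

```python
def dominance_frontiers(preds, idom):
    """Compute DF(n) for every node from the predecessor map and the
    immediate-dominator tree (Cooper-Harvey-Kennedy formulation)."""
    df = {n: set() for n in idom}
    for n, ps in preds.items():
        if len(ps) < 2:
            continue                      # only join points contribute frontiers
        for p in ps:
            runner = p
            while runner != idom[n]:      # climb the dominator tree from each pred
                df[runner].add(n)
                runner = idom[runner]
    return df

# Hypothetical diamond CFG: entry -> a, entry -> b, a -> merge, b -> merge.
preds = {"entry": [], "a": ["entry"], "b": ["entry"], "merge": ["a", "b"]}
idom = {"entry": None, "a": "entry", "b": "entry", "merge": "entry"}
print(dominance_frontiers(preds, idom))   # a and b each have {'merge'} as frontier
```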
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ⋄W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ⋄W. Thus, ⋄W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area² product (EDA²P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA²P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
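As a concrete instance of the method surveyed above, the following sketch applies ADMM to the lasso, one of the listed applications: an x-update that solves a ridge-like linear system, a z-update by soft-thresholding, and a scaled dual update. The parameter choices (rho, iteration count) and names are illustrative, not prescribed by the review.

import numpy as np

def lasso_admm(A, b, lam, rho=1.0, iters=200):
    """Solve min 0.5*||Ax - b||^2 + lam*||x||_1 via ADMM."""
    n = A.shape[1]
    AtA, Atb = A.T @ A, A.T @ b
    L = np.linalg.cholesky(AtA + rho * np.eye(n))  # factor once, reuse
    x = z = u = np.zeros(n)
    for _ in range(iters):
        # x-update: (AtA + rho I) x = Atb + rho (z - u), via the Cholesky factor
        x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))
        # z-update: soft-thresholding of x + u at level lam/rho
        v, k = x + u, lam / rho
        z = np.maximum(0, v - k) - np.maximum(0, -v - k)
        u = u + x - z                                  # scaled dual update
    return z

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 20))
x_true = np.zeros(20); x_true[:3] = [1.5, -2.0, 0.7]
b = A @ x_true + 0.01 * rng.normal(size=50)
print(np.round(lasso_admm(A, b, lam=1.0), 2))  # sparse estimate of x_true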
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)-size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε²}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D+√n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(log N), where N is a loose upper bound on the size of the network and D is the diameter.
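The O(D)-round undirected baseline mentioned above is easy to make concrete. The toy synchronous simulation below floods the minimum input value, so after D rounds every node holds the global minimum; the function and graph are hypothetical illustrations, not the paper's directed-network algorithms.

def flood_min(adj, inputs, rounds):
    """Synchronous min-flooding: each round, every node sends its current
    minimum to all neighbors; after D rounds all nodes hold the global min.
    adj: dict node -> list of neighbors (undirected); inputs: node -> value.
    """
    est = dict(inputs)
    for _ in range(rounds):
        msgs = {v: [est[u] for u in adj[v]] for v in adj}   # deliver round's messages
        est = {v: min([est[v]] + msgs[v]) for v in adj}     # update local estimate
    return est

# path graph 0-1-2-3 (diameter 3)
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(flood_min(adj, {0: 7, 1: 3, 2: 9, 3: 5}, rounds=3))  # every node -> 3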
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator, while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via error amplifier) with a moderate-gain high-frequency path (via bandpass filter) at the inputs of the PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate the two zeros required in a Type III compensator. A constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process, with the constant Gm/C biasing technique being applied to one of the designs. Measurement results show that the converter output settles within 7 μs for a load current step of 500 mA. A peak efficiency of 97% is obtained at 360 mW output power, and a high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of the proposed compensator are reduced by more than 75% in both designs, compared to an equivalent conventional Type III compensator.
Fundamental analysis of a car to car visible light communication system This paper presents a mathematical model for car-to-car (C2C) visible light communications (VLC) that aims to predict the system performance under different communication geometries. A market-weighted headlamp beam pattern model is employed. We consider both the line-of-sight (LOS) and non-line-of-sight (NLOS) links, and outline the relationship between the communication distance, the system bit error rate (BER) performance and the BER distribution on a vertical plane. Results show that by placing a photodetector (PD) at a height of 0.2-0.4 m above road surface in the car, the communications coverage range can be extended up to 20 m at a data rate of 2Mbps.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
High-speed analog-to-digital converters in downscaled CMOS High data-rate communications need high speed analog-to-digital converters. Recent flash and time interleaved SAR converters implemented in downscaled CMOS technologies have achieved GS/s conversion rates with very low power consumption. Flash ADCs can reach high speed with a single channel but the resolution is limited by exponential complexity and power consumption. SAR ADCs are well suited for higher resolution but, due to the sequential operation, require either massive interleaving or very fast technologies to achieve high speed. Hybrid architectures combine the advantages of different architectures to achieve the optimum compromise for a given resolution. In this paper the trade-offs between power, area and complexity for high speed designs are discussed and the potential of hybrid architectures is investigated.
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use dominance frontiers, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
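For illustration, a compact way to compute dominance frontiers, given the immediate-dominator tree, is the later formulation due to Cooper, Harvey, and Kennedy rather than the paper's original bottom-up traversal; the hedged Python sketch below shows the idea on a diamond-shaped CFG.

def dominance_frontiers(preds, idom):
    """Cooper/Harvey/Kennedy-style dominance frontier computation.

    preds: dict node -> list of CFG predecessors
    idom:  dict node -> immediate dominator (entry maps to itself)
    DF(b) = nodes y such that b dominates a predecessor of y
            but does not strictly dominate y.
    """
    df = {n: set() for n in preds}
    for b, ps in preds.items():
        if len(ps) >= 2:                  # only join points contribute
            for p in ps:
                runner = p
                while runner != idom[b]:  # walk up the dominator tree
                    df[runner].add(b)
                    runner = idom[runner]
    return df

# diamond CFG: entry -> a, entry -> b, a -> join, b -> join
preds = {"entry": [], "a": ["entry"], "b": ["entry"], "join": ["a", "b"]}
idom = {"entry": "entry", "a": "entry", "b": "entry", "join": "entry"}
print(dominance_frontiers(preds, idom))  # DF(a) = DF(b) = {'join'}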
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ⋄W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ⋄W. Thus, ⋄W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area² product (EDA²P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA²P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)-size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε²}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D+√n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(log N), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator, while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via error amplifier) with a moderate-gain high-frequency path (via bandpass filter) at the inputs of the PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate the two zeros required in a Type III compensator. A constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process, with the constant Gm/C biasing technique being applied to one of the designs. Measurement results show that the converter output settles within 7 μs for a load current step of 500 mA. A peak efficiency of 97% is obtained at 360 mW output power, and a high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of the proposed compensator are reduced by more than 75% in both designs, compared to an equivalent conventional Type III compensator.
Fundamental analysis of a car to car visible light communication system This paper presents a mathematical model for car-to-car (C2C) visible light communications (VLC) that aims to predict the system performance under different communication geometries. A market-weighted headlamp beam pattern model is employed. We consider both the line-of-sight (LOS) and non-line-of-sight (NLOS) links, and outline the relationship between the communication distance, the system bit error rate (BER) performance and the BER distribution on a vertical plane. Results show that by placing a photodetector (PD) at a height of 0.2-0.4 m above road surface in the car, the communications coverage range can be extended up to 20 m at a data rate of 2Mbps.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
An approach to testing specifications An approach to testing the consistency of specifications is explored, which is applicable to the design validation of communication protocols and other cases of step-wise refinement. In this approach, a testing module compares a trace of interactions obtained from an execution of the refined specification (e.g., the protocol specification) with the reference specification (e.g., the communication service specification). Non-determinism in reference specifications presents certain problems. Using an extended finite state transition model for the specifications, a strategy for limiting the amount of non-determinacy is presented. An automated method for constructing a testing module for a given reference specification is discussed. Experience with the application of this testing approach to the design of a Transport protocol and a distributed mutual exclusion algorithm is described.
Threaded code The concept of “threaded code” is presented as an alternative to machine language code. Hardware and software realizations of it are given. In software it is realized as interpretive code not needing an interpreter. Extensions and optimizations are mentioned.
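A rough software analogue of the idea can be sketched in Python under the assumption that each handler may dispatch directly to the next: the "compiled" program is a flat list of handler references and inline operands, and control threads from one handler to the next without a central interpreter loop. The handler names are invented for the example.

# Toy "threaded code": each handler finishes by dispatching to the next
# cell of the code list, so no central dispatch loop runs. (Each dispatch
# is a call here, so very long programs would hit Python's recursion limit.)

def op_push(code, i, stack):
    stack.append(code[i + 1])            # inline operand follows the handler
    return code[i + 2](code, i + 2, stack)

def op_add(code, i, stack):
    b, a = stack.pop(), stack.pop()
    stack.append(a + b)
    return code[i + 1](code, i + 1, stack)

def op_halt(code, i, stack):
    return stack.pop()

# "compiled" program: push 2, push 3, add, halt
code = [op_push, 2, op_push, 3, op_add, op_halt]
print(code[0](code, 0, []))  # -> 5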
SimpleScalar: An Infrastructure for Computer System Modeling Designers can execute programs on software models to validate a proposed hardware design's performance and correctness, while programmers can use these models to develop and test software before the real hardware becomes available. Three critical requirements drive the implementation of a software model: performance, flexibility, and detail. Performance determines the amount of workload the model can exercise given the machine resources available for simulation. Flexibility indicates how well the model is structured to simplify modification, permitting design variants or even completely different designs to be modeled with ease. Detail defines the level of abstraction used to implement the model's components. The SimpleScalar tool set provides an infrastructure for simulation and architectural modeling. It can model a variety of platforms ranging from simple unpipelined processors to detailed dynamically scheduled microarchitectures with multiple-level memory hierarchies. SimpleScalar simulators reproduce computing device operations by executing all program instructions using an interpreter. The tool set's instruction interpreters support several popular instruction sets. A performance-modeling framework must also be able to model complex modern machines and effectively manage the large software projects needed to model such machines. Asim addresses these needs by providing a modular and reusable framework for creating many models. The framework's modularity helps break down the performance-modeling problem into individual pieces that can be modeled separately, while its reusability allows using a software component repeatedly in different contexts.
Latency Sensitive FMA Design The implementation of merged floating-point multiply-add operations can be optimized in many ways. For latency sensitive applications, our cascade design reduces the accumulation-dependent latency by 2x over a fused design, at a cost of a 13% increase in non-accumulation-dependent latency. A simple in-order execution model shows this design is superior in most applications, providing a 12% average reduction in FP stalls and improving performance by up to 6%. Simulations of superscalar out-of-order machines show 4% average improvement in CPI in 2-way machines and 4.6% in 4-way machines. The cascade design has the same area and energy budget as a traditional fused multiply-add FMA.
Full Speed Ahead: Detailed Architectural Simulation at Near-Native Speed Cycle-level microarchitectural simulation is the de facto standard to estimate performance of next-generation platforms. Unfortunately, the level of detail needed for accurate simulation requires complex, and therefore slow, simulation models that run at speeds that are thousands of times slower than native execution. With the introduction of sampled simulation, it has become possible to simulate only the key, representative portions of a workload in a reasonable amount of time and reliably estimate its overall performance. These sampling methodologies provide the ability to identify regions for detailed execution, and through microarchitectural state checkpointing, one can quickly and easily determine the performance characteristics of a workload for a variety of microarchitectural changes. While this strategy of sampling simulations to generate checkpoints performs well for static applications, more complex scenarios involving hardware-software co-design (such as co-optimizing both a Java virtual machine and the microarchitecture it is running on) cause this methodology to break down, as new microarchitectural checkpoints are needed for each memory hierarchy configuration and software version. Solutions are therefore needed to enable fast and accurate simulation that also address the needs of hardware-software co-design and exploration. In this work we present a methodology to enhance checkpoint-based sampled simulation. Our solution integrates hardware virtualization to provide near-native speed, virtualized fast-forwarding to regions of interest, and parallel detailed simulation. However, as we cannot warm the simulated caches during virtualized fast-forwarding, we develop a novel approach to estimate the error introduced by limited cache warming, through the use of optimistic and pessimistic warming simulations. Using virtualized fast-forwarding (which operates at 90% of native speed on average), we demonstrate a parallel sampling simulator that can be used to accurately estimate the IPC of standard workloads with an average error of 2.2% while still reaching an execution rate of 2.0 GIPS (63% of native) on average. Additionally, we demonstrate that our parallelization strategy scales almost linearly and simulates one core at up to 93% of its native execution rate, 19,000x faster than detailed simulation, while using 8 cores.
Golden Gate: Bridging The Resource-Efficiency Gap Between ASICs and FPGA Prototypes We present Golden Gate, an FPGA-based simulation tool that decouples the timing of an FPGA host platform from that of the target RTL design. In contrast to previous work in static time-multiplexing of FPGA resources, Golden Gate employs the Latency-Insensitive Bounded Dataflow Network (LI-BDN) formalism to decompose the simulator into subcomponents, each of which may be independently and automatically optimized. This structure allows Golden Gate to support a broad class of optimizations that improve resource utilization by implementing FPGA-hostile structures over multiple cycles, while the LI-BDN formalism ensures that the simulator still produces bit- and cycle-exact results. To verify that these optimizations are implemented correctly, we also present LIME, a model-checking tool that provides a push-button flow for checking whether optimized subcomponents adhere to an associated correctness specification, while also guaranteeing forward progress. Finally, we use Golden Gate to generate a cycle-exact simulator of a multi-core SoC, where we reduce LUT utilization by up to 26% by coercing multi-ported, combinationally read memories into simulation models backed by time-multiplexed block RAMs, enabling us to simulate 50% more cores on a single FPGA.
Hardware Design with a Scripting Language The Python Hardware Description Language (PyHDL) provides a scripting interface to object-oriented hardware design in C++. PyHDL uses the PamDC and PAM-Blox libraries to generate FPGA circuits. The main advantage of scripting languages is a reduction in development time for high-level designs. We propose a two-step approach: first, use scripting to explore effects of composition and parameterisation; second, convert the scripted designs into compiled components for performance. Our results show that, for small designs, our method offers a 5 to 7 times improvement in turnaround time. For a large 10x10 matrix vector multiplier, our method offers 365% and 19% improvements in turnaround time over purely scripted and purely compiled methods, respectively.
Randomized algorithms This text by two well-known experts in the field presents the basic concepts in the design and analysis of randomized algorithms at a level accessible to beginning graduate students, professionals and researchers.
Building efficient wireless sensor networks with low-level naming In most distributed systems, naming of nodes for low-level communication leverages topological location (such as node addresses) and is independent of any application. In this paper, we investigate an emerging class of distributed systems where low-level communication does not rely on network topological location. Rather, low-level communication is based on attributes that are external to the network topology and relevant to the application. When combined with dense deployment of nodes, this kind of named data enables in-network processing for data aggregation, collaborative signal processing, and similar problems. These approaches are essential for emerging applications such as sensor networks where resources such as bandwidth and energy are limited. This paper is the first description of the software architecture that supports named data and in-network processing in an operational, multi-application sensor-network. We show that approaches such as in-network aggregation and nested queries can significantly affect network traffic. In one experiment aggregation reduces traffic by up to 42% and nested queries reduce loss rates by 30%. Although aggregation has been previously studied in simulation, this paper demonstrates nested queries as another form of in-network processing, and it presents the first evaluation of these approaches over an operational testbed.
On the evolution of user interaction in Facebook Online social networks have become extremely popular; numerous sites allow users to interact and share content using social links. Users of these networks often establish hundreds to even thousands of social links with other users. Recently, researchers have suggested examining the activity network - a network that is based on the actual interaction between users, rather than mere friendship - to distinguish between strong and weak links. While initial studies have led to insights on how an activity network is structurally different from the social network itself, a natural and important aspect of the activity network has been disregarded: the fact that over time social links can grow stronger or weaker. In this paper, we study the evolution of activity between users in the Facebook social network to capture this notion. We find that links in the activity network tend to come and go rapidly over time, and the strength of ties exhibits a general decreasing trend of activity as the social network link ages. For example, only 30% of Facebook user pairs interact consistently from one month to the next. Interestingly, we also find that even though the links of the activity network change rapidly over time, many graph-theoretic properties of the activity network remain unchanged.
An artificial neural network (p,d,q) model for timeseries forecasting Artificial neural networks (ANNs) are flexible computing frameworks and universal approximators that can be applied to a wide range of time series forecasting problems with a high degree of accuracy. However, despite all the advantages cited for artificial neural networks, their performance on some real time series is not satisfactory. Improving forecasting accuracy, especially for time series, is an important yet often difficult task facing forecasters. Both theoretical and empirical findings have indicated that integration of different models can be an effective way of improving upon their predictive performance, especially when the models in the ensemble are quite different. In this paper, a novel hybrid model of artificial neural networks is proposed using auto-regressive integrated moving average (ARIMA) models in order to yield a more accurate forecasting model than artificial neural networks alone. The empirical results with three well-known real data sets indicate that the proposed model can be an effective way to improve the forecasting accuracy achieved by artificial neural networks. Therefore, it can be used as an appropriate alternative model for forecasting tasks, especially when higher forecasting accuracy is needed.
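A hedged sketch of the general ARIMA-plus-ANN hybrid idea (closest to the classic Zhang-style decomposition; the paper's (p,d,q) model wires the components differently): fit ARIMA for the linear part, train a small neural network on lagged ARIMA residuals for the nonlinear part, and sum the two one-step forecasts. It assumes the statsmodels and scikit-learn APIs; the order, lag count, and network size are arbitrary.

import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from sklearn.neural_network import MLPRegressor

def hybrid_forecast(y, order=(1, 1, 1), lags=4):
    """One-step hybrid forecast: ARIMA forecast + ANN forecast of residuals."""
    arima = ARIMA(y, order=order).fit()
    resid = arima.resid
    # lagged residual matrix for one-step-ahead ANN modelling
    X = np.column_stack([resid[i:len(resid) - lags + i] for i in range(lags)])
    t = resid[lags:]
    ann = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000,
                       random_state=0).fit(X, t)
    linear = arima.forecast(steps=1)[0]                       # linear component
    nonlinear = ann.predict(resid[-lags:].reshape(1, -1))[0]  # residual component
    return linear + nonlinear

rng = np.random.default_rng(1)
y = np.cumsum(rng.normal(size=300)) + np.sin(np.arange(300) / 5)
print(hybrid_forecast(y))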
Efficiency of a Regenerative Direct-Drive Electromagnetic Active Suspension. The efficiency and power consumption of a direct-drive electromagnetic active suspension system for automotive applications are investigated. A McPherson suspension system is considered, where the strut consists of a direct-drive brushless tubular permanent-magnet actuator in parallel with a passive spring and damper. This suspension system can both deliver active forces and regenerate power due to imposed movements. A linear quadratic regulator controller is developed for the improvement of comfort and handling (dynamic tire load). The power consumption is simulated as a function of the passive damping in the active suspension system. Finally, measurements are performed on a quarter-car test setup to validate the analysis and simulations.
The real-time segmentation of indoor scene based on RGB-D sensor The vision system of a mobile robot is a low-level function that provides the target information about the current environment required by higher-level vision tasks. The real-time performance and robustness of object segmentation in cluttered environments remain a serious problem in robot vision. In this paper, a new real-time indoor scene segmentation method based on RGB-D images is presented, and the extracted primary object regions are then used for object recognition. First, the depth data are filtered with an improved version of a traditional filtering method. Then, using the improved depth information, the algorithm extracts the foreground and segments the color image at a resolution of 640×480 from a Kinect camera. Finally, the segmentation results are applied to object recognition in indoor scenes to validate the effectiveness of the scene segmentation. The results of indoor segmentation demonstrate the real-time performance and robustness of the proposed method. In addition, the segmentation results improve the accuracy and reduce the time of object recognition in cluttered indoor scenes.
A 0.5 V 10-bit 3 MS/s SAR ADC With Adaptive-Reset Switching Scheme and Near-Threshold Voltage-Optimized Design Technique This brief presents a 10-bit ultra-low power energy-efficient successive approximation register (SAR) analog-to-digital converter (ADC). A new adaptive-reset switching scheme is proposed to reduce the switching energy of the capacitive digital-to-analog converter (CDAC). The proposed adaptive-reset switching scheme reduces the average switching energy of the CDAC by 90% compared to the conventional scheme without the common-mode voltage variation. In addition, the near-threshold voltage (NTV)-optimized digital library is adopted to alleviate the performance degradation in the ultra-low supply voltage while simultaneously increasing the energy efficiency. The NTV-optimized design technique is also introduced to the bootstrapped switch design to improve the linearity of the sample-and-hold circuit. The test chip is fabricated in a 65 nm CMOS, and its core area is 0.022 mm². At a supply of 0.5 V and sampling speed of 3 MS/s, the SAR ADC achieves an ENOB of 8.78 bit and consumes 3.09 μW. The resultant Walden figure-of-merit (FoM) is 2.34 fJ/conv.-step.
1.2
0.2
0.2
0.2
0.2
0.2
0.1
0
0
0
0
0
0
0
Demystifying Fog Computing: Characterizing Architectures, Applications and Abstractions Internet of Things (IoT) has accelerated the deployment of millions of sensors at the edge of the network, through Smart City infrastructure and lifestyle devices. Cloud computing platforms are often tasked with handling these large volumes and fast streams of data from the edge. Recently, Fog computing has emerged as a concept for low-latency and resource-rich processing of these observation streams, to complement Edge and Cloud computing. In this paper, we review various dimensions of system architecture, application characteristics and platform abstractions that are manifest in this Edge, Fog and Cloud eco-system. We highlight novel capabilities of the Edge and Fog layers, such as physical and application mobility, privacy sensitivity, and a nascent runtime environment. IoT application case studies based on first-hand experiences across diverse domains drive this categorization. We also highlight the gap between the potential and the reality of Fog computing, and identify challenges that need to be overcome for the solution to be sustainable. Taken together, our article can help platform and application developers bridge the gap that remains in making Fog computing viable.
State Machine Replication for the Masses with BFT-SMART The last fifteen years have seen an impressive amount of work on protocols for Byzantine fault-tolerant (BFT) state machine replication (SMR). However, there is still a need for practical and reliable software libraries implementing this technique. BFT-SMART is an open-source Java-based library implementing robust BFT state machine replication. Some of the key features of this library that distinguishes it from similar works (e.g., PBFT and UpRight) are improved reliability, modularity as a first-class property, multicore-awareness, reconfiguration support and a flexible programming interface. When compared to other SMR libraries, BFT-SMART achieves better performance and is able to withstand a number of real-world faults that previous implementations cannot.
An Object Store Service for a Fog/Edge Computing Infrastructure Based on IPFS and a Scale-Out NAS Fog and Edge Computing infrastructures have been proposed to address the latency issue of the current Cloud Computing platforms. While a couple of works illustrated the advantages of these infrastructures, in particular for Internet of Things (IoT) applications, elementary Cloud services that can take advantage of the geo-distribution of resources have not been proposed yet. In this paper, we propose a first-class object store service for Fog/Edge facilities. Our proposal is built with Scale-out Network Attached Storage systems (NAS) and IPFS, a BitTorrent-based object store spread throughout the Fog/Edge infrastructure. Without impacting the IPFS advantages, particularly in terms of data mobility, the use of a Scale-out NAS on each site reduces the inter-site exchanges that are costly but mandatory for the metadata management in the original IPFS implementation. Several experiments conducted on the Grid'5000 testbed confirm, first, the benefit of using an object store service spread at the Edge and, second, the importance of mitigating inter-site accesses. The paper concludes by giving a few directions to improve the performance and fault tolerance criteria of our Fog/Edge Object Store Service.
SA-Chord: A Self-Adaptive P2P Overlay Network Pure Edge Computing relies on peer-to-peer overlay networks to realize the communication backbone between participating entities. In these settings, entities are characterized by high heterogeneity, mobility, and variability, which introduce runtime uncertainty and may harm the dependability of the network. Departing from state-of-the-art solutions, overlay networks for Pure Edge Computing should take into account the dynamics of the operating environment and self-adapt their topology accordingly, in order to increase the dependability of the communication. To this end, this paper discusses the preliminary development and validation of SA-Chord, a self-adaptive version of the well-known Chord protocol, able to adapt the network topology according to a given global goal. SA-Chord has been validated through simulation against two distinct goals: (i) minimize energy consumption and (ii) maximize network throughput. Simulation results are promising and show how SA-Chord efficiently and effectively achieves a given goal.
A proposal of a distributed access control over Fog computing: The ITS use case Internet of Things (IoT) raises many security challenges in relation with the different applications that can be deployed over these environments. IoT access control systems must respond to the new IoT requirements such as scalability, dynamicity, real-time interaction and resources constraint. The goal of this paper is to propose an approach based on Fog and Distributed Hash Table (DHT) toward access control for the Internet of Things. To evaluate the performances of our access solution, we used NS-3 and SUMO. The preliminary obtained results show acceptable overhead for the considered Intelligent Transport System (ITS) scenario.
Fog Computing: Helping the Internet of Things Realize Its Potential. The Internet of Things (IoT) could enable innovations that enhance the quality of life, but it generates unprecedented amounts of data that are difficult for traditional systems, the cloud, and even edge computing to handle. Fog computing is designed to overcome these limitations.
Threaded code The concept of “threaded code” is presented as an alternative to machine language code. Hardware and software realizations of it are given. In software it is realized as interpretive code not needing an interpreter. Extensions and optimizations are mentioned.
Leveraging on-chip voltage regulators as a countermeasure against side-channel attacks Side-channel attacks have become a significant threat to integrated circuit security. Circuit-level techniques are proposed in this paper as a countermeasure against side-channel attacks. A distributed on-chip power delivery system consisting of multi-level switched capacitor (SC) voltage converters is proposed, where the individual interleaved stages are turned on and off either based on the workload information or pseudo-randomly to scramble the power consumption profile. In the case that changes in the workload demand do not trigger the power delivery system to turn individual stages on or off, the active stages are reshuffled, with so-called converter-reshuffling, to insert random spikes in the power consumption profile. An entropy-based metric is developed to evaluate the security performance of the proposed converter-reshuffling technique as compared to three other existing on-chip power delivery schemes. The increase in the power trace entropy with the CoRe scheme is also demonstrated with simulation results to further verify the theoretical analysis.
The PARSEC benchmark suite: characterization and architectural implications This paper presents and characterizes the Princeton Application Repository for Shared-Memory Computers (PARSEC), a benchmark suite for studies of Chip-Multiprocessors (CMPs). Previous available benchmarks for multiprocessors have focused on high-performance computing applications and used a limited number of synchronization methods. PARSEC includes emerging applications in recognition, mining and synthesis (RMS) as well as systems applications which mimic large-scale multithreaded commercial programs. Our characterization shows that the benchmark suite covers a wide spectrum of working sets, locality, data sharing, synchronization and off-chip traffic. The benchmark suite has been made available to the public.
Gossip-based aggregation in large dynamic networks As computer networks increase in size, become more heterogeneous and span greater geographic distances, applications must be designed to cope with the very large scale, poor reliability, and often, with the extreme dynamism of the underlying network. Aggregation is a key functional building block for such applications: it refers to a set of functions that provide components of a distributed system access to global information including network size, average load, average uptime, location and description of hotspots, and so on. Local access to global information is often very useful, if not indispensable for building applications that are robust and adaptive. For example, in an industrial control application, some aggregate value reaching a threshold may trigger the execution of certain actions; a distributed storage system will want to know the total available free space; load-balancing protocols may benefit from knowing the target average load so as to minimize the load they transfer. We propose a gossip-based protocol for computing aggregate values over network components in a fully decentralized fashion. The class of aggregate functions we can compute is very broad and includes many useful special cases such as counting, averages, sums, products, and extremal values. The protocol is suitable for extremely large and highly dynamic systems due to its proactive structure---all nodes receive the aggregate value continuously, thus being able to track any changes in the system. The protocol is also extremely lightweight, making it suitable for many distributed applications including peer-to-peer and grid computing systems. We demonstrate the efficiency and robustness of our gossip-based protocol both theoretically and experimentally under a variety of scenarios including node and communication failures.
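The averaging instance of such a gossip protocol is simple to simulate. In the sketch below, each node repeatedly averages its value with a random peer's, which conserves the sum and drives every local estimate toward the global average; the synchronous, centralized loop is a toy stand-in for the paper's fully decentralized setting.

import random

def gossip_average(values, rounds=30, seed=0):
    """Pairwise averaging gossip: each round, every node contacts a random
    peer and both replace their values with the pair's mean. The global sum
    is conserved, so all estimates converge to the true average."""
    rng = random.Random(seed)
    vals = list(values)
    n = len(vals)
    for _ in range(rounds):
        for i in range(n):
            j = rng.randrange(n)
            if j != i:
                m = (vals[i] + vals[j]) / 2
                vals[i] = vals[j] = m
    return vals

est = gossip_average([10, 0, 4, 2, 9, 5])
print(est)  # every entry close to the true mean, 5.0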
Linear Amplification with Nonlinear Components A technique for producing bandpass linear amplification with nonlinear components (LINC) is described. The bandpass signal first is separated into two constant envelope component signals. All of the amplitude and phase information of the original bandpass signal is contained in phase modulation on the component signals. These constant envelope signals can be amplified or translated in frequency by amplifiers or mixers which have nonlinear input-output amplitude transfer characteristics. Passive linear combining of the amplified and/or translated component signals produces an amplified and/or translated replica of the original signal.
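The decomposition itself is a short identity, shown here in complex-baseband form: writing s = A·exp(jφ) with A ≤ Amax, the components s1,2 = (Amax/2)·exp(j(φ ± θ)) with θ = arccos(A/Amax) each have constant envelope and sum back to s. A numpy sketch follows; the test signal and names are illustrative.

import numpy as np

def linc_split(baseband):
    """Split a complex-baseband signal into two constant-envelope components
    whose sum reproduces it (the LINC decomposition)."""
    amp = np.abs(baseband)
    phi = np.angle(baseband)
    amax = amp.max()
    theta = np.arccos(amp / amax)          # phase offset encodes the amplitude
    s1 = (amax / 2) * np.exp(1j * (phi + theta))
    s2 = (amax / 2) * np.exp(1j * (phi - theta))
    return s1, s2

t = np.linspace(0, 1, 1000)
s = (0.5 + 0.4 * np.cos(2 * np.pi * 3 * t)) * np.exp(2j * np.pi * 10 * t)
s1, s2 = linc_split(s)
print(np.allclose(s1 + s2, s))    # True: passive combining restores the signal
print(np.ptp(np.abs(s1)) < 1e-9)  # True: each component has constant envelope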
Opportunistic Information Dissemination in Mobile Ad-hoc Networks: The Profit of Global Synchrony The topic of this paper is the study of Information Dissemination in Mobile Ad-hoc Networks by means of deterministic protocols. We characterize the connectivity resulting from the movement, from failures and from the fact that nodes may join the computation at different times with two values, α and β, so that, within α time slots, some node that has the information must be connected to some node without it for at least β time slots. The protocols studied are classified into three classes: oblivious (the transmission schedule of a node is only a function of its ID), quasi-oblivious (the transmission schedule may also depend on a global time), and adaptive. The main contribution of this work concerns negative results. Contrasting the lower and upper bounds derived, interesting complexity gaps among protocol classes are observed. More precisely, in order to guarantee any progress towards solving the problem, it is shown that β must be at least n − 1 in general, but that β ∈ Ω(n²/log n) if an oblivious protocol is used. Since quasi-oblivious protocols can guarantee progress with β ∈ O(n), this represents a significant gap, almost linear in n, between oblivious and quasi-oblivious protocols. Regarding the time to complete the dissemination, a lower bound of Ω(nα + n³/log n) is proved for oblivious protocols, which is tight up to a polylogarithmic factor because a constructive O(nα + n³ log n) upper bound exists for the same class. It is also proved that adaptive protocols require Ω(nα + n²), which is optimal given that a matching upper bound can be proved for quasi-oblivious protocols. These results show that the gap in time complexity between oblivious and quasi-oblivious, and hence adaptive, protocols is almost linear. This gap is what we call the profit of global synchrony, since it represents the gain the network obtains from global synchrony with respect to not having it.
Towards elastic SDR architectures using dynamic task management. SDR platforms integrating several types and numbers of processing elements in System-on-Chips become an attractive solution for baseband processing in wireless systems. In order to cope with the diversity of protocol applications and the heterogeneity of multi-core architectures, a hierarchical approach for workload distribution is proposed in this paper. Specifically, a system-level scheduler is employed to map applications to multiple processing clusters, complemented with a cluster-level scheduler - the CoreManager - for dynamic resource allocation and configuration as well as for task and data scheduling. A performance analysis of the proposed approach is presented, which shows the advantages of dynamic scheduling against a static approach for variable workloads in the LTE-Advanced uplink multi-user scenarios.
Multi-Channel Neural Recording Implants: A Review. The recently growing progress in neuroscience research and relevant achievements, as well as advancements in the fabrication process, have increased the demand for neural interfacing systems. Brain-machine interfaces (BMIs) have been revealed to be a promising method for the diagnosis and treatment of neurological disorders and the restoration of sensory and motor function. Neural recording implants, as a part of BMI, are capable of capturing brain signals, and amplifying, digitizing, and transferring them outside of the body with a transmitter. The main challenges of designing such implants are minimizing power consumption and the silicon area. In this paper, multi-channel neural recording implants are surveyed. After presenting various neural-signal features, we investigate main available neural recording circuit and system architectures. The fundamental blocks of available architectures, such as neural amplifiers, analog to digital converters (ADCs) and compression blocks, are explored. We cover the various topologies of neural amplifiers, provide a comparison, and probe their design challenges. To achieve a relatively high SNR at the output of the neural amplifier, noise reduction techniques are discussed. Also, to transfer neural signals outside of the body, they are digitized using data converters, then in most cases, the data compression is applied to mitigate power consumption. We present the various dedicated ADC structures, as well as an overview of main data compression methods.
1.2
0.2
0.2
0.2
0.2
0.04
0
0
0
0
0
0
0
0
Distributed multi-agent optimization with state-dependent communication We study distributed algorithms for solving global optimization problems in which the objective function is the sum of local objective functions of agents and the constraint set is given by the intersection of local constraint sets of agents. We assume that each agent knows only his own local objective function and constraint set, and exchanges information with the other agents over a randomly varying network topology to update his information state. We assume a state-dependent communication model over this topology: communication is Markovian with respect to the states of the agents and the probability with which the links are available depends on the states of the agents. We study a projected multi-agent subgradient algorithm under state-dependent communication. The state-dependence of the communication introduces significant challenges and couples the study of information exchange with the analysis of subgradient steps and projection errors. We first show that the multi-agent subgradient algorithm when used with a constant stepsize may result in the agent estimates to diverge with probability one. Under some assumptions on the stepsize sequence, we provide convergence rate bounds on a “disagreement metric” between the agent estimates. Our bounds are time-nonhomogeneous in the sense that they depend on the initial starting time. Despite this, we show that agent estimates reach an almost sure consensus and converge to the same optimal solution of the global optimization problem with probability one under different assumptions on the local constraint sets and the stepsize sequence.
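A toy sketch of a projected multi-agent subgradient step of this general flavor, with a fixed doubly stochastic ring in place of the paper's randomly varying, state-dependent topology: each agent mixes neighbor estimates, takes a diminishing-stepsize subgradient step on its local objective, and projects onto the common constraint set. All concrete choices below are illustrative.

import numpy as np

def projected_subgradient_consensus(c, steps=2000, lo=0.0, hi=10.0):
    """Each agent i minimizes f_i(x) = |x - c[i]| over [lo, hi]; the sum of
    the f_i is minimized at the median of c. Update per step: mix with ring
    neighbors, take a subgradient step, project onto the constraint set."""
    n = len(c)
    W = np.zeros((n, n))
    for i in range(n):                       # doubly stochastic ring weights
        W[i, i] = 0.5
        W[i, (i - 1) % n] = W[i, (i + 1) % n] = 0.25
    x = np.zeros(n)
    for k in range(1, steps + 1):
        alpha = 1.0 / np.sqrt(k)             # diminishing stepsize
        g = np.sign(x - c)                   # subgradient of |x - c_i|
        x = np.clip(W @ x - alpha * g, lo, hi)  # mix, step, project
    return x

c = np.array([1.0, 2.0, 6.0, 7.0, 9.0])
print(projected_subgradient_consensus(c))  # all estimates approach median(c) = 6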
Multiple agent-based autonomy for satellite constellations There is an increasing desire to use constellations of autonomous spacecraft working together to accomplish complex mission objectives. Multiple, highly autonomous, satellite systems are envisioned because they are capable of higher performance, lower cost, better fault tolerance, reconfigurability and upgradability. This paper presents an architecture and multi-agent design and simulation environment that will enable agent-based multi-satellite systems to fulfill their complex mission objectives, termed TeamAgent™. Its application is shown for TechSat21, a U.S. Air Force mission designed to explore the benefits of distributed satellite systems. Required spacecraft functions, software agents, and multi-agent organisations are described for the TechSat21 mission, as well as their implementation. Agent-based simulations of TechSat21 case studies show the autonomous operation and how TeamAgent can be used to evaluate and compare multi-agent-based organisations.
A distributed multiple dimensional QoS constrained resource scheduling optimization policy in computational grid This paper addresses efficient QoS-based resource scheduling in computational grids. It defines a set of QoS dimensions with a utility function for each dimension, and uses a market model for distributed optimization to maximize the global utility. The user specifies requirements through utility functions, one per QoS dimension. In the grid, task agents act as consumers that pay for grid resources, while resource providers earn profits from the task agents. A task agent's utility can then be defined as a weighted sum of single-dimensional QoS utility functions. QoS-based grid resource scheduling optimization is decomposed into two subproblems: the joint optimization of resource users and resource providers in the grid market. An iterative multiple-QoS scheduling algorithm is used to perform optimal multiple-QoS-based resource scheduling. Grid users propose payments to the resource providers, while the resource providers set a price for each resource. The experiments show that optimal QoS-based resource scheduling involves less overhead and leads to more efficient resource allocation than non-optimal resource allocation.
Distributed State Estimation of Sensor-Network Systems Subject to Markovian Channel Switching With Application to a Chemical Process. This paper addresses a distributed estimator design problem for linear systems deployed over sensor networks within a multiple communication channels (MCCs) framework. A practical scenario is taken into account such that the channel used for communication can be switched and the switching is governed by a Markov chain. With the existence of communicational imperfections and external disturbances, ...
A Continuous-Time Algorithm for Distributed Optimization Based on Multiagent Networks Based on multiagent networks, this paper introduces a continuous-time algorithm to deal with distributed convex optimization. Using nonsmooth analysis and algebraic graph theory, the distributed network algorithm is modeled with the aid of a nonautonomous differential inclusion, and each agent exchanges information with its first-order and second-order neighbors. For any initial point, the solution of the proposed network reaches consensus on the set of minimizers if the graph has a spanning tree. In contrast to existing continuous-time algorithms for distributed optimization, the proposed model requires the fewest state variables and relaxes the strongly connected, weight-balanced topology assumption to a weaker case. A modified form of the proposed continuous-time algorithm is also given, and it is proven that this algorithm is suitable for solving distributed problems if the undirected network is connected. Finally, two numerical examples and an optimal placement problem confirm the effectiveness of the proposed continuous-time algorithm.
Constrained Consensus and Optimization in Multi-Agent Networks We present distributed algorithms that can be used by multiple agents to align their estimates with a particular value over a network with time-varying connectivity. Our framework is general in that this value can represent a consensus value among multiple agents or an optimal solution of an optimization problem, where the global objective function is a combination of local agent objective functions. Our main focus is on constrained problems where the estimate of each agent is restricted to lie in a different constraint set. To highlight the effects of constraints, we first consider a constrained consensus problem and present a distributed "projected consensus algorithm" in which agents combine their local averaging operation with projection on their individual constraint sets. This algorithm can be viewed as a version of an alternating projection method with weights that are varying over time and across agents. We establish convergence and convergence rate results for the projected consensus algorithm. We next study a constrained optimization problem for optimizing the sum of local objective functions of the agents subject to the intersection of their local constraint sets. We present a distributed "projected subgradient algorithm" which involves each agent performing a local averaging operation, taking a subgradient step to minimize its own objective function, and projecting on its constraint set. We show that, with an appropriately selected stepsize rule, the agent estimates generated by this algorithm converge to the same optimal solution for the cases when the weights are constant and equal, and when the weights are time-varying but all agents have the same constraint set.
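As a rough sketch of the projected subgradient iteration described above (averaging with neighbors, a subgradient step on the local objective, then projection onto the local constraint set), the toy below uses a complete graph with uniform weights, absolute-value objectives, and box constraints; these concrete choices are illustrative assumptions of mine, not the paper's setup.

```python
import numpy as np

# Toy distributed projected subgradient sketch: each agent i averages its
# neighbors' estimates, steps along a subgradient of its own objective
# f_i(x) = |x - target_i|, and projects onto its local box constraint.
n = 4
W = np.full((n, n), 1.0 / n)              # doubly stochastic weights (complete graph)
targets = np.array([1.0, 2.0, 3.0, 4.0])  # local objectives f_i(x) = |x - target_i|
lo, hi = -10.0, 10.0                      # local constraint sets X_i = [lo, hi]

x = np.zeros(n)                           # agents' current estimates
for k in range(1, 201):
    v = W @ x                             # local averaging with neighbors
    g = np.sign(v - targets)              # subgradient of |x - target_i| at v_i
    x = np.clip(v - (0.5 / k) * g, lo, hi)  # diminishing stepsize, then projection

print(x)  # estimates agree and land near a median of the targets
```

With the diminishing stepsize the estimates reach consensus on a minimizer of the summed objective, here any point in [2, 3].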
Mirror descent and nonlinear projected subgradient methods for convex optimization The mirror descent algorithm (MDA) was introduced by Nemirovsky and Yudin for solving convex optimization problems. This method exhibits an efficiency estimate that is only mildly dependent on the dimension of the decision variables, and is thus suitable for solving very large scale optimization problems. We present a new derivation and analysis of this algorithm. We show that the MDA can be viewed as a nonlinear projected-subgradient type method, derived from using a general distance-like function instead of the usual Euclidean squared distance. Within this interpretation, we derive in a simple way convergence and efficiency estimates. We then propose an Entropic mirror descent algorithm for convex minimization over the unit simplex, with a global efficiency estimate proven to be only mildly dependent on the dimension of the problem.
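For concreteness, the entropic mirror descent step over the unit simplex mentioned above takes the well-known multiplicative closed form (standard notation, supplied by me):

\[
x_{k+1,i} \;=\; \frac{x_{k,i}\, e^{-t_k g_{k,i}}}{\sum_{j=1}^{n} x_{k,j}\, e^{-t_k g_{k,j}}},
\qquad g_k \in \partial f(x_k),
\]

i.e., each step reweights the coordinates by exponentiated subgradient entries and renormalizes, and the dimension n enters the resulting efficiency estimate only through a \(\sqrt{\ln n}\) factor.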
Synchronization in complex networks of phase oscillators: A survey. The emergence of synchronization in a network of coupled oscillators is a fascinating subject of multidisciplinary research. This survey reviews the vast literature on the theory and the applications of complex oscillator networks. We focus on phase oscillator models that are widespread in real-world synchronization phenomena, that generalize the celebrated Kuramoto model, and that feature a rich phenomenology. We review the history and the countless applications of this model throughout science and engineering. We justify the importance of the widespread coupled oscillator model as a locally canonical model and describe some selected applications relevant to control scientists, including vehicle coordination, electric power networks, and clock synchronization. We introduce the reader to several synchronization notions and performance estimates. We propose analysis approaches to phase and frequency synchronization, phase balancing, pattern formation, and partial synchronization. We present the sharpest known results about synchronization in networks of homogeneous and heterogeneous oscillators, with complete or sparse interconnection topologies, and in finite-dimensional and infinite-dimensional settings. We conclude by summarizing the limitations of existing analysis methods and by highlighting some directions for future research.
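The prototypical model in this literature is the Kuramoto system together with its order parameter (standard forms):

\[
\dot{\theta}_i \;=\; \omega_i + \frac{K}{N}\sum_{j=1}^{N}\sin(\theta_j - \theta_i),
\qquad
r\,e^{\mathrm{i}\psi} \;=\; \frac{1}{N}\sum_{j=1}^{N} e^{\mathrm{i}\theta_j},
\]

where r ∈ [0, 1] quantifies phase coherence; synchronization emerges once the coupling gain K is large enough relative to the spread of the natural frequencies ω_i.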
A Bayesian Method for the Induction of Probabilistic Networks from Data This paper presents a Bayesian method for constructing probabilistic networks from databases. In particular, we focus on constructing Bayesian belief networks. Potential applications include computer-assisted hypothesis testing, automated scientific discovery, and automated construction of probabilistic expert systems. We extend the basic method to handle missing data and hidden (latent) variables. We show how to perform probabilistic inference by averaging over the inferences of multiple belief networks. Results are presented of a preliminary evaluation of an algorithm for constructing a belief network from a database of cases. Finally, we relate the methods in this paper to previous work, and we discuss open problems.
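The core quantity in this line of work is a closed-form marginal likelihood of a network structure given the database; in the usual notation (r_i states of variable x_i, q_i parent configurations, N_ijk the count of cases with x_i in state k under parent configuration j, and N_ij = Σ_k N_ijk), the score reads

\[
P(B_S, D) \;=\; P(B_S)\,\prod_{i=1}^{n}\prod_{j=1}^{q_i}
\frac{(r_i - 1)!}{(N_{ij} + r_i - 1)!}\,\prod_{k=1}^{r_i} N_{ijk}! \, ,
\]

so candidate structures can be compared via products of local terms computed directly from database counts. (This is my reconstruction of the standard form associated with this method, not a quotation from the abstract.)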
An ultra-wideband CMOS low noise amplifier for 3-5-GHz UWB system An ultra-wideband (UWB) CMOS low noise amplifier (LNA) topology that combines a narrowband LNA with a resistive shunt-feedback is proposed. The resistive shunt-feedback provides wideband input matching with small noise figure (NF) degradation by reducing the Q-factor of the narrowband LNA input and flattens the passband gain. The proposed UWB amplifier is implemented in 0.18-μm CMOS technol...
Variability in TCP round-trip times We measured and analyzed the variability in round trip times (RTTs) within TCP connections using passive measurement techniques. We collected eight hours of bidirectional traces containing over 22 million TCP connections between end-points at a large university campus and almost 1 million remote locations. Of these, we used over 1 million TCP connections that yield 10 or more valid RTT samples, to examine RTT variability within a TCP connection. Our results indicate that contrary to observations in several previous studies, RTT values within a connection vary widely. Our results have implications for designing better simulation models, and understanding how round trip times affect the dynamic behavior and throughput of TCP connections.
Clocking Analysis, Implementation and Measurement Techniques for High-Speed Data Links—A Tutorial The performance of high-speed wireline data links depend crucially on the quality and precision of their clocking infrastructure. For future applications, such as microprocessor systems that require terabytes/s of aggregate bandwidth, signaling system designers will have to become even more aware of detailed clock design tradeoffs in order to jointly optimize I/O power, bandwidth, reliability, silicon area and testability. The goal of this tutorial is to assist I/O circuit and system designers in developing intuitive and practical understanding of I/O clocking tradeoffs at all levels of the link hierarchy from the circuit-level implementation to system-level architecture.
PuDianNao: A Polyvalent Machine Learning Accelerator Machine Learning (ML) techniques are pervasive tools in various emerging commercial applications, but have to be accommodated by powerful computer systems to process very large data. Although general-purpose CPUs and GPUs have provided straightforward solutions, their energy-efficiencies are limited due to their excessive support for flexibility. Hardware accelerators may achieve better energy-efficiencies, but each accelerator often accommodates only a single ML technique (family). According to the famous No-Free-Lunch theorem in the ML domain, however, an ML technique that performs well on one dataset may perform poorly on another, which implies that such an accelerator may sometimes lead to poor learning accuracy. Even leaving learning accuracy aside, such an accelerator can still become inapplicable simply because the concrete ML task is altered, or the user chooses another ML technique. In this study, we present an ML accelerator called PuDianNao, which accommodates seven representative ML techniques, including k-means, k-nearest neighbors, naive Bayes, support vector machine, linear regression, classification tree, and deep neural network. Benefiting from our thorough analysis of the computational primitives and locality properties of different ML techniques, PuDianNao can perform up to 1056 GOP/s (e.g., additions and multiplications) in an area of 3.51 mm², and consumes only 596 mW. Compared with the NVIDIA K20M GPU (28nm process), PuDianNao (65nm process) is 1.20x faster, and can reduce the energy by 128.41x.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
1.054965
0.05
0.05
0.05
0.0155
0.008303
0.001382
0.00035
0
0
0
0
0
0
Finite-time stabilization for a class of nonlinear systems via optimal control. In general, finite-time stabilization techniques can always stabilize a system if control cost is not considered. Since control cost is a very important factor in the control area, we investigate the finite-time stabilization problem for a class of nonlinear systems in this paper, where the control cost can also be reduced. We formulate this problem as an optimal control problem, in which the control functions are optimized such that the system can be stabilized with minimum control cost. Then, the control parameterization enhancing transform and the control parameterization method are applied to solve this problem. Two numerical examples illustrate the effectiveness of the proposed method.
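As a worked scalar illustration of the gain-versus-cost tension the paper optimizes (a standard textbook example, not taken from the paper): the feedback \(\dot{x} = -k\,\operatorname{sgn}(x)\,|x|^{\alpha}\) with \(k > 0\) and \(0 < \alpha < 1\) drives the state to zero exactly at the finite settling time

\[
T(x_0) \;=\; \frac{|x_0|^{\,1-\alpha}}{k\,(1-\alpha)},
\]

so a larger gain k shortens the settling time but raises the control effort, which is precisely the trade-off the optimal control formulation resolves.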
Finite-time stabilization by state feedback control for a class of time-varying nonlinear systems. In this paper, finite-time stabilization is considered for a class of nonlinear systems dominated by a lower-triangular model with a time-varying gain. Based on the finite-time Lyapunov stability theorem and dynamic gain control design approach, state feedback finite-time stabilization controllers are proposed with gains being tuned online by two dynamic equations. Different from many existing finite-time control designs for lower-triangular nonlinear systems, the celebrated backstepping method is not utilized here. It is observed that our design procedure is much simpler, and the resulting control gains are in general not as high as those provided by the backstepping method. A simulation example is given to demonstrate the effectiveness of the proposed design procedure.
Robust stability of Hopfield delayed neural networks via an augmented L-K functional. This paper focuses on the robust stability of delayed Hopfield neural networks. A free-matrix-based inequality approach is developed by introducing a set of slack variables, which can be optimized by means of existing convex optimization algorithms. To capture a larger portion of the dynamical behaviors of the system, uncertain parameters are considered. By constructing an augmented Lyapunov-Krasovskii functional, sufficient conditions are derived to guarantee that the considered neural systems are completely stable. The conditions are presented in the form of linear matrix inequalities (LMIs). Finally, numerical examples are given to show the applicability of the presented results.
Robust Finite-Time Stabilization of Fractional-Order Neural Networks With Discontinuous and Continuous Activation Functions Under Uncertainty. This paper is concerned with robust finite-time stabilization for a class of fractional-order neural networks (FNNs) with two types of activation functions (i.e., discontinuous and continuous activation function) under uncertainty. It is worth noting that there exist few results about FNNs with discontinuous activation functions, which is mainly because classical solutions and theories of differen...
Quaternion-Valued Twin-Multistate Hopfield Neural Networks With Dual Connections Dual connections (DCs) utilize the noncommutativity of quaternions and improve the noise tolerance of quaternion Hopfield neural networks (QHNNs). In this article, we introduce DCs to twin-multistate QHNNs. We conduct computer simulations to investigate the noise tolerance. The QHNNs with DCs were weak against an increase in the number of training patterns, but they were robust against increased resolution factor. The simulation results can be explained from the standpoints of storage capacities and rotational invariance.
A Fuzzy Lyapunov Function Method to Stability Analysis of Fractional-Order T–S Fuzzy Systems This article investigates the stability analysis and stabilization problems for fractional-order T–S fuzzy systems via fuzzy Lyapunov function method. A membership-function-dependent fuzzy Lyapunov function instead of the general quadratic Lyapunov function is employed to obtain the stability and stabilization criteria. Different from the general quadratic Lyapunov function, the fuzzy Lyapunov functions contain the product of three term functions. Since the general Leibniz formula cannot be satisfied for fractional derivative, the current results on the fractional derivative for the quadratic Lyapunov functions cannot be extended to the fuzzy Lyapunov functions. Therefore, to estimate the fractional derivative of fuzzy Lyapunov functions, the fractional derivative rule for the product of three term functions is proposed. Based on the proposed fractional derivative rule, the corresponding stability and stabilization criteria are established, which extend the existing results. Finally, two simulation examples are presented to illustrate the effectiveness of the proposed theoretical analysis.
Finite-time synchronization of nonidentical BAM discontinuous fuzzy neural networks with delays and impulsive effects via non-chattering quantized control •Two new inequalities are developed to deal with the mismatched coefficients of the fuzzy part.•A simple but robust quantized state feedback controller is designed to overcome the effects of discontinuous activations, time delay, and nonidentical coefficients simultaneously. The designed control schemes do not utilize the sign function and can save channel resources. Moreover, novel non-chattering quantized adaptive controllers are also considered to reduce the control cost.•By utilizing 1-norm analytical technique and comparison system method, the effect of impulses on the FTS is well coped with.•Without utilizing the finite-time stability theorem in [16], several FTS criteria are obtained. Moreover, the settling time is explicitly estimated. Results of this paper can easily be extended to FTS of other classical delayed impulsive NNs with or without nonidentical coefficients.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
Cellular Logic-in-Memory Arrays As a direct consequence of large-scale integration, many advantages in the design, fabrication, testing, and use of digital circuitry can be achieved if the circuits can be arranged in a two-dimensional iterative, or cellular, array of identical elementary networks, or cells. When a small amount of storage is included in each cell, the same array may be regarded either as a logically enhanced memory array, or as a logic array whose elementary gates and connections can be "programmed" to realize a desired logical behavior.
On implementing omega with weak reliability and synchrony assumptions We study the feasibility and cost of implementing Ω, a fundamental failure detector at the core of many algorithms, in systems with weak reliability and synchrony assumptions. Intuitively, Ω allows processes to eventually elect a common leader. We first give an algorithm that implements Ω in a weak system S where processes are synchronous, but: (a) any number of them may crash, and (b) only the output links of an unknown correct process are eventually timely (all other links can be asynchronous and/or lossy). This is in contrast to previous implementations of Ω which assume that a quadratic number of links are eventually timely, or systems that are strong enough to implement the eventually perfect failure detector ◇P. We next show that implementing Ω in S is expensive: even if we want an implementation that tolerates just one process crash, all correct processes (except possibly one) must send messages forever; moreover, a quadratic number of links must carry messages forever. We then show that with a small additional assumption, namely the existence of some unknown correct process whose asynchronous links are lossy but fair, we can implement Ω efficiently: we give an algorithm for Ω such that eventually only one process (the elected leader) sends messages.
Bandwidth-efficient management of DHT routing tables Today an application developer using a distributed hash table (DHT) with n nodes must choose a DHT protocol from the spectrum between O(1) lookup protocols [9, 18] and O(log n) protocols [20-23, 25, 26]. O(1) protocols achieve low latency lookups on small or low-churn networks because lookups take only a few hops, but incur high maintenance traffic on large or high-churn networks. O(log n) protocols incur less maintenance traffic on large or high-churn networks but require more lookup hops in small networks. Accordion is a new routing protocol that does not force the developer to make this choice: Accordion adjusts itself to provide the best performance across a range of network sizes and churn rates while staying within a bounded bandwidth budget. The key challenges in the design of Accordion are the algorithms that choose the routing table's size and content. Each Accordion node learns of new neighbors opportunistically, in a way that causes the density of its neighbors to be inversely proportional to their distance in ID space from the node. This distribution allows Accordion to vary the table size along a continuum while still guaranteeing at most O(log n) lookup hops. The user-specified bandwidth budget controls the rate at which a node learns about new neighbors. Each node limits its routing table size by evicting neighbors that it judges likely to have failed. High churn (i.e., short node lifetimes) leads to a high eviction rate. The equilibrium between the learning and eviction processes determines the table size. Simulations show that Accordion maintains an efficient lookup latency versus bandwidth tradeoff over a wider range of operating conditions than existing DHTs.
Analysis and Design of Passive Polyphase Filters Passive RC polyphase filters (PPFs) are analyzed in detail in this paper. First, a method to calculate the output signals of an n-stage PPF is presented. As a result, all relevant properties of PPFs, such as amplitude and phase imbalance and loss, are calculated. The rules for optimal pole frequency planning to maximize the image-reject ratio provided by a PPF are given. The loss of PPF is divided into two factors, namely the intrinsic loss caused by the PPF itself and the loss caused by termination impedances. Termination impedances known a priori can be used to derive such component values, which minimize the overall loss. The effect of parasitic capacitance and component value deviation are analyzed and discussed. The method of feeding the input signal to the first PPF stage affects the mechanisms of the whole PPF. As a result, two slightly different PPF topologies can be distinguished, and they are separately analyzed and compared throughout this paper. A design example is given to demonstrate the developed design procedure.
High Frequency Buck Converter Design Using Time-Based Control Techniques Time-based control techniques for the design of high switching frequency buck converters are presented. Using time as the processing variable, the proposed controller operates with CMOS-level digital-like signals but without adding any quantization error. A ring oscillator is used as an integrator in place of conventional opamp-RC or Gm-C integrators while a delay line is used to perform voltage to time conversion and to sum time signals. A simple flip-flop generates pulse-width modulated signal from the time-based output of the controller. Hence time-based control eliminates the need for wide bandwidth error amplifier, pulse-width modulator (PWM) in analog controllers or high resolution analog-to-digital converter (ADC) and digital PWM in digital controllers. As a result, it can be implemented in small area and with minimal power. Fabricated in a 180 nm CMOS process, the prototype buck converter occupies an active area of 0.24 mm², of which the controller occupies only 0.0375 mm². It operates over a wide range of switching frequencies (10-25 MHz) and regulates output to any desired voltage in the range of 0.6 V to 1.5 V with 1.8 V input voltage. With a 500 mA step in the load current, the settling time is less than 3.5 μs and the measured reference tracking bandwidth is about 1 MHz. Better than 94% peak efficiency is achieved while consuming a quiescent current of only 2 μA/MHz.
Neuropixels Data-Acquisition System: A Scalable Platform for Parallel Recording of 10,000+ Electrophysiological Signals. Although CMOS fabrication has enabled a quick evolution in the design of high-density neural probes and neural-recording chips, the scaling and miniaturization of the complete data-acquisition systems has happened at a slower pace. This is mainly due to the complexity and the many requirements that change depending on the specific experimental settings. In essence, the fundamental challenge of a n...
1.2
0.2
0.2
0.2
0.2
0.2
0.1
0
0
0
0
0
0
0
Distributed Dynamic Weighted Average Consensus for Disturbed Multiagent Systems in Fixed Time This article addresses the dynamic weighted average consensus (DWAC) problem of disturbed first-order multiagent systems, where a group of agents work cooperatively to track the weighted average of multiple time-varying signals. A class of distributed continuous-time algorithms are proposed to achieve DWAC within a fixed settling time, regardless of the initial conditions of the agents' states. Our first algorithm achieves accurate tracking of the desired trajectory by taking advantage of the variable structure property of sign functions. The second algorithm achieves uniformly bounded tracking with adjustable tracking error bounds by applying the continuous control law. Lyapunov method based stability analysis shows that both the algorithms achieve the fixed-time tracking control even when the single-integrator agent dynamics are affected by bounded disturbances. Moreover, the relationships between controller parameters and tracking performance are derived and the upper bounds of settling time are estimated. Finally, the proposed algorithms are applied to solve the distributed time-varying quadratic optimization problem, and the simulation results confirm the effectiveness of the proposed distributed fixed-time DWAC algorithms.
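Such fixed-time guarantees typically rest on a Lyapunov lemma of the following standard form (a generic statement supplied by me, not the paper's specific bound): if a positive definite function V satisfies

\[
\dot{V} \;\le\; -a\,V^{p} - b\,V^{q}, \qquad a, b > 0,\quad 0 < p < 1 < q,
\]

then the origin is fixed-time stable with settling time bounded by

\[
T \;\le\; \frac{1}{a\,(1-p)} + \frac{1}{b\,(q-1)}
\]

independently of the initial condition, which is why the settling time can be tuned through controller parameters alone.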
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use dominance frontiers, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
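To make the dominance-frontier idea concrete, here is a compact way to compute frontiers from immediate dominators; note this follows the later simplification popularized by Cooper, Harvey, and Kennedy rather than the paper's own construction, and the CFG below is a made-up example.

```python
# Dominance frontiers from immediate dominators: for every join point b,
# walk each predecessor up the dominator tree until idom(b) is reached;
# every block visited along the way has b in its dominance frontier.
def dominance_frontiers(preds, idom):
    """preds: block -> list of predecessor blocks; idom: block -> immediate dominator."""
    df = {b: set() for b in preds}
    for b, ps in preds.items():
        if len(ps) < 2:
            continue                      # only join points contribute frontiers
        for p in ps:
            runner = p
            while runner != idom[b]:
                df[runner].add(b)
                runner = idom[runner]
    return df

# Diamond CFG: entry -> a, entry -> b, then a and b meet at join.
preds = {"entry": [], "a": ["entry"], "b": ["entry"], "join": ["a", "b"]}
idom = {"entry": "entry", "a": "entry", "b": "entry", "join": "entry"}
print(dominance_frontiers(preds, idom))   # join lands in DF(a) and DF(b)
```

The join point is exactly where φ-functions are placed when constructing SSA form.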
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ◇W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ◇W. Thus, ◇W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
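A minimal sketch of Chord's core idea, consistent hashing onto an identifier ring with each key stored at its successor, follows; the tiny identifier space and linear successor scan are simplifications of mine (real Chord routes through O(log n) finger-table hops).

```python
from hashlib import sha1

M = 2 ** 16                               # small identifier space for the demo

def ident(name: str) -> int:
    """Hash a node name or key onto the identifier ring."""
    return int(sha1(name.encode()).hexdigest(), 16) % M

nodes = sorted(ident(f"node-{i}") for i in range(8))

def successor(key: str) -> int:
    """A key lives at the first node clockwise from its identifier."""
    kid = ident(key)
    for nid in nodes:
        if nid >= kid:
            return nid
    return nodes[0]                       # wrap around the ring

print(successor("some-data-item"))        # identifier of the responsible node
```

Because the mapping changes only locally when membership changes, a node join or leave moves only a small fraction of the keys (about K/n for K keys and n nodes).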
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area² product (EDA²P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA²P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
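In scaled form, the ADMM iteration for minimizing f(x) + g(z) subject to Ax + Bz = c alternates the standard three updates:

\[
\begin{aligned}
x^{k+1} &= \operatorname*{argmin}_{x}\; f(x) + \tfrac{\rho}{2}\,\lVert Ax + Bz^{k} - c + u^{k}\rVert_2^2,\\
z^{k+1} &= \operatorname*{argmin}_{z}\; g(z) + \tfrac{\rho}{2}\,\lVert Ax^{k+1} + Bz - c + u^{k}\rVert_2^2,\\
u^{k+1} &= u^{k} + Ax^{k+1} + Bz^{k+1} - c,
\end{aligned}
\]

so f and g are handled in separate subproblems coupled only through the scaled dual variable u, which is what makes the splitting natural for the distributed settings the review surveys.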
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)-size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε²}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D+√n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(log N), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator; while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via error amplifier) with a moderate-gain high-frequency path (via bandpass filter) at the inputs of PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate two zeros required in a Type III compensator. Constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process with constant Gm/C biasing technique being applied to one of the designs. Measurement results show that the converter output settles within 7 μs for a load current step of 500 mA. Peak efficiency of 97% is obtained at 360 mW output power, and high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of the proposed compensator are reduced by more than 75% in both designs, compared to an equivalent conventional Type III compensator.
Fundamental analysis of a car to car visible light communication system This paper presents a mathematical model for car-to-car (C2C) visible light communications (VLC) that aims to predict the system performance under different communication geometries. A market-weighted headlamp beam pattern model is employed. We consider both the line-of-sight (LOS) and non-line-of-sight (NLOS) links, and outline the relationship between the communication distance, the system bit error rate (BER) performance and the BER distribution on a vertical plane. Results show that by placing a photodetector (PD) at a height of 0.2-0.4 m above road surface in the car, the communications coverage range can be extended up to 20 m at a data rate of 2 Mbps.
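For background only: the paper models a measured, market-weighted headlamp beam, but the textbook line-of-sight VLC channel gain for an ideal Lambertian source of order m, which such analyses generalize, is commonly written as

\[
H_{\mathrm{LOS}} \;=\; \frac{(m+1)\,A}{2\pi d^{2}}\,\cos^{m}(\phi)\,\cos(\psi),
\qquad 0 \le \psi \le \Psi_c,
\]

with A the photodetector area, d the transmitter-receiver distance, φ the emission angle, ψ the incidence angle, and Ψ_c the receiver field of view; the received SNR, and hence the BER at a given data rate, follows from this gain.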
A 32-Channel Time-Multiplexed Artifact-Aware Neural Recording System This paper presents a low-power, low-noise microsystem for the recording of neural local field potentials or intracranial electroencephalographic signals. It features 32 time-multiplexed channels at the electrode interface and offers the possibility to spatially delta encode data to take advantage of the large correlation of signals captured from nearby channels. The circuit also implements a mixed-signal voltage-triggered auto-ranging algorithm which attenuates large interferers in the digital domain while preserving neural information. This effectively increases the system dynamic range and avoids the onset of saturation. A prototype, fabricated in a standard 180 nm CMOS process, has been experimentally verified in vitro with cellular cultures of primary cortical neurons from mice. The system shows an integrated input-referred noise in the 0.5–200 Hz band of 1.4 μVrms for a spot noise of about 85 nV/√Hz. The system draws 1.5 μW per channel from a 1.2 V supply and obtains 71 dB + 26 dB dynamic range when the artifact-aware auto-ranging mechanism is enabled, without penalising other critical specifications such as crosstalk between channels or common-mode and power supply rejection ratios.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Scalable Digital Neuromorphic Architecture for Large-Scale Biophysically Meaningful Neural Network With Multi-Compartment Neurons. Multicompartment emulation is an essential step to enhance the biological realism of neuromorphic systems and to further understand the computational power of neurons. In this paper, we present a hardware efficient, scalable, and real-time computing strategy for the implementation of large-scale biologically meaningful neural networks with one million multi-compartment neurons (CMNs). The hardware platform uses four Altera Stratix III field-programmable gate arrays, and both the cellular and the network levels are considered, which provides an efficient implementation of a large-scale spiking neural network with biophysically plausible dynamics. At the cellular level, a cost-efficient multi-CMN model is presented, which can reproduce the detailed neuronal dynamics with representative neuronal morphology. A set of efficient neuromorphic techniques for single-CMN implementation are presented with all the hardware cost of memory and multiplier resources removed and with hardware performance of computational speed enhanced by 56.59% in comparison with the classical digital implementation method. At the network level, a scalable network-on-chip (NoC) architecture is proposed with a novel routing algorithm to enhance the NoC performance including throughput and computational latency, leading to higher computational efficiency and capability in comparison with state-of-the-art projects. The experimental results demonstrate that the proposed work can provide an efficient model and architecture for large-scale biologically meaningful networks, while the hardware synthesis results demonstrate low area utilization and high computational speed that supports the scalability of the approach.
From few to many: illumination cone models for face recognition under variable lighting and pose We present a generative appearance-based method for recognizing human faces under variation in lighting and viewpoint. Our method exploits the fact that the set of images of an object in fixed pose, but under all possible illumination conditions, is a convex cone in the space of images. Using a small number of training images of each face taken with different lighting directions, the shape and albedo of the face can be reconstructed. In turn, this reconstruction serves as a generative model that can be used to render, or synthesize, images of the face under novel poses and illumination conditions. The pose space is then sampled and, for each pose, the corresponding illumination cone is approximated by a low-dimensional linear subspace whose basis vectors are estimated using the generative model. Our recognition algorithm assigns to a test image the identity of the closest approximated illumination cone (based on Euclidean distance within the image space). We test our face recognition method on 4,050 images from the Yale Face Database B; these images contain 405 viewing conditions (9 poses × 45 illumination conditions) for 10 individuals. The method performs almost without error, except on the most extreme lighting directions, and significantly outperforms popular recognition methods that do not use a generative model.
Energy efficient parallel neuromorphic architectures with approximate arithmetic on FPGA. In this paper, we present parallel neuromorphic processor architectures for spiking neural networks on FPGA. The proposed architectures address several critical issues pertaining to efficient parallelization of the update of membrane potentials, on-chip storage of synaptic weights and integration of approximate arithmetic units. The trade-offs between throughput, hardware cost and power overheads for different configurations are thoroughly investigated. Notably, for the application of handwritten digit recognition, a promising training speedup of 13.5x and a recognition speedup of 25.8x are achieved by a parallel implementation whose degree of parallelism is 32. In spite of the 120 MHz operating frequency, the 32-way parallel hardware design demonstrates a 59.4x training speedup over the single-thread software program running on a 2.2 GHz general purpose CPU. Equally importantly, by leveraging the built-in resilience of the neuromorphic architecture we demonstrate the energy benefit resulting from the use of approximate arithmetic computation. Up to 20% improvement in energy consumption is achieved by integrating approximate multipliers into the system while maintaining almost the same level of recognition rate achieved using standard multipliers. To the best of our knowledge, it is the first time that approximate computing and parallel processing are applied to FPGA based spiking neural networks. The influence of the parallel processing on the benefits of approximate computing is also discussed in detail.
A Low-Cost High-Speed Neuromorphic Hardware Based on Spiking Neural Network Neuromorphic computing is a relatively new interdisciplinary research topic that draws on various fields of science and technology, such as electronics, computer science, and biology. Neuromorphic systems consist of software/hardware systems that implement neural networks based on human brain functionalities. The goal of neuromorphic systems is to mimic the biologically inspired concepts of th...
Efficient Design of Spiking Neural Network With STDP Learning Based on Fast CORDIC In emerging Spiking Neural Network (SNN) based neuromorphic hardware design, energy efficiency and on-line learning are attractive advantages mainly contributed by bio-inspired local learning with nonlinear dynamics and at the cost of associated hardware complexity. This paper presents a novel SNN design employing fast COordinate Rotation DIgital Computer (CORDIC) algorithm to achieve fast spike t...
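The paper's fast CORDIC variant is not reproduced here; as background, plain circular-mode CORDIC evaluates trigonometric functions with only shifts, adds, and a precomputed angle table, which is what makes CORDIC-style arithmetic attractive for lightweight spiking-neuron hardware. The sketch below is a generic reference implementation, not the paper's design.

```python
import math

# Circular-mode CORDIC: rotate a scaled unit vector by +/- atan(2^-i) micro-
# rotations until the residual angle z reaches zero; x, y converge to
# (cos z0, sin z0) for |z0| <= ~1.74 rad.
N = 24
ANGLES = [math.atan(2.0 ** -i) for i in range(N)]
K = 1.0
for i in range(N):
    K /= math.sqrt(1.0 + 2.0 ** (-2 * i))   # aggregate gain correction factor

def cordic_sin_cos(z):
    x, y = K, 0.0
    for i in range(N):
        d = 1.0 if z >= 0.0 else -1.0       # rotate toward zero residual angle
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * ANGLES[i]
    return x, y

print(cordic_sin_cos(0.5))                  # ~ (cos 0.5, sin 0.5)
print(math.cos(0.5), math.sin(0.5))
```

In hardware, the multiplications by 2^-i become wire shifts, so each iteration costs only adders and a small lookup table.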
Application of Deep Compression Technique in Spiking Neural Network Chip. In this paper, a reconfigurable and scalable spiking neural network processor, containing 192 neurons and 6144 synapses, is developed. By using deep compression technique in spiking neural network chip, the amount of physical synapses can be reduced to 1/16 of that needed in the original network, while the accuracy is maintained. This compression technique can greatly reduce the number of SRAMs inside the chip as well as the power consumption of the chip. This design achieves throughput per unit area of 1.1 GSOP/(s·mm²) at 1.2 V, and energy consumed per SOP of 35 pJ. A 2-layer fully-connected spiking neural network is mapped to the chip, and thus the chip is able to realize handwritten digit recognition on MNIST with an accuracy of 91.2%.
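A rough sketch of the two deep-compression steps such a chip exploits, magnitude pruning followed by weight sharing through a small codebook, is given below; the threshold, matrix size, and 16-entry codebook are illustrative assumptions of mine, not the paper's parameters.

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(0.0, 1.0, (64, 64)).astype(np.float32)  # dense synaptic weights

mask = np.abs(W) > 0.8                    # prune: keep only large-magnitude synapses
survivors = W[mask]

# Weight sharing: map each surviving weight to the nearest of 16 codebook
# levels, so it can be stored as a 4-bit index plus a shared table.
levels = np.quantile(survivors, np.linspace(0.0, 1.0, 16))
codes = np.abs(survivors[:, None] - levels[None, :]).argmin(axis=1)

print(f"kept {mask.mean():.1%} of synapses, each as a 4-bit codebook index")
```

Shrinking both the number of stored synapses and the bits per synapse is what cuts the on-chip SRAM and, with it, the chip's power.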
Impossibility of distributed consensus with one faulty process The consensus problem involves an asynchronous system of processes, some of which may be unreliable. The problem is for the reliable processes to agree on a binary value. In this paper, it is shown that every protocol for this problem has the possibility of nontermination, even with only one faulty process. By way of contrast, solutions are known for the synchronous case, the “Byzantine Generals” problem.
A fast and elitist multiobjective genetic algorithm: NSGA-II Multi-objective evolutionary algorithms (MOEAs) that use non-dominated sorting and sharing have been criticized mainly for: (1) their O(MN³) computational complexity (where M is the number of objectives and N is the population size); (2) their non-elitism approach; and (3) the need to specify a sharing parameter. In this paper, we suggest a non-dominated sorting-based MOEA, called NSGA-II (Non-dominated Sorting Genetic Algorithm II), which alleviates all of the above three difficulties. Specifically, a fast non-dominated sorting approach with O(MN²) computational complexity is presented. Also, a selection operator is presented that creates a mating pool by combining the parent and offspring populations and selecting the best N solutions (with respect to fitness and spread). Simulation results on difficult test problems show that NSGA-II is able, for most problems, to find a much better spread of solutions and better convergence near the true Pareto-optimal front compared to the Pareto-archived evolution strategy and the strength-Pareto evolutionary algorithm - two other elitist MOEAs that pay special attention to creating a diverse Pareto-optimal front. Moreover, we modify the definition of dominance in order to solve constrained multi-objective problems efficiently. Simulation results of the constrained NSGA-II on a number of test problems, including a five-objective, seven-constraint nonlinear problem, are compared with another constrained multi-objective optimizer, and the much better performance of NSGA-II is observed.
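The sorting step named above is simple to state in code: count, for every solution, how many others dominate it; the undominated ones form the first front, and peeling a front decrements the counters of the solutions it dominates. The sketch below is a plain rendering of that O(MN²) procedure for minimization; the sample points are mine.

```python
def dominates(a, b):
    """a dominates b (minimization): no worse everywhere, better somewhere."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def fast_nondominated_sort(pop):
    S = [[] for _ in pop]                 # S[p]: indices of solutions p dominates
    n = [0] * len(pop)                    # n[p]: number of solutions dominating p
    fronts = [[]]
    for p, a in enumerate(pop):
        for q, b in enumerate(pop):
            if dominates(a, b):
                S[p].append(q)
            elif dominates(b, a):
                n[p] += 1
        if n[p] == 0:
            fronts[0].append(p)           # first front: the non-dominated solutions
    while fronts[-1]:
        nxt = []
        for p in fronts[-1]:
            for q in S[p]:
                n[q] -= 1
                if n[q] == 0:             # all of q's dominators are in earlier fronts
                    nxt.append(q)
        fronts.append(nxt)
    return fronts[:-1]

pts = [(1, 4), (2, 2), (4, 1), (3, 3), (5, 5)]
print(fast_nondominated_sort(pts))        # [[0, 1, 2], [3], [4]]
```

NSGA-II then fills the next generation front by front, breaking ties within the last admitted front by crowding distance.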
Language-based information-flow security Current standard security practices do not provide substantial assurance that the end-to-end behavior of a computing system satisfies important security policies such as confidentiality. An end-to-end confidentiality policy might assert that secret input data cannot be inferred by an attacker through the attacker's observations of system output; this policy regulates information flow. Conventional security mechanisms such as access control and encryption do not directly address the enforcement of information-flow policies. Previously, a promising new approach has been developed: the use of programming-language techniques for specifying and enforcing information-flow policies. In this paper, we survey the past three decades of research on information-flow security, particularly focusing on work that uses static program analysis to enforce information-flow policies. We give a structured view of work in the area and identify some important open challenges.
Fundamental control algorithms in mobile networks In this work we propose simple and efficient protocols for counting and leader election in mobile networks. For mobile networks with fixed base stations we provide a new and very efficient protocol for counting the number of mobile hosts. The main part of the work concentrates on ad-hoc networks (no fixed subnetwork). We provide a model for these networks and leader election (and a special form of counting) protocols for both named and anonymous mobile hosts. In this work we define two protocol ...
Bundled execution of recurring traces for energy-efficient general purpose processing Technology scaling has delivered on its promises of increasing device density on a single chip. However, the voltage scaling trend has failed to keep up, introducing tight power constraints on manufactured parts. In such a scenario, there is a need to incorporate energy-efficient processing resources that can enable more computation within the same power budget. Energy efficiency solutions in the past have typically relied on application specific hardware and accelerators. Unfortunately, these approaches do not extend to general purpose applications due to their irregular and diverse code base. Towards this end, we propose BERET, an energy-efficient co-processor that can be configured to benefit a wide range of applications. Our approach identifies recurring instruction sequences as phases of "temporal regularity" in a program's execution, and maps suitable ones to the BERET hardware, a three-stage pipeline with a bundled execution model. This judicious off-loading of program execution to a reduced-complexity hardware demonstrates significant savings on instruction fetch, decode and register file accesses energy. On average, BERET reduces energy consumption by a factor of 3-4X for the program regions selected across a range of general-purpose and media applications. The average energy savings for the entire application run was 35% over a single-issue in-order processor.
A dynamically reconfigurable system-on-a-chip architecture for future mobile digital signal processing The evolving of current and future broadband access techniques into the wireless domain introduces new and flexible network architectures with difficult and interesting challenges. The system designers are faced with a challenging set of problems that stem from access mechanisms, energy conservation, error rate, transmission speed characteristics of the wireless links and mobility aspects. This paper presents first the major challenges in realizing flexible microelectronic system solutions for digital baseband signal processing in future mobile communication applications. Based thereupon, the architecture design of flexible system-on-a-chip solutions is discussed. The focus of the paper is the introduction of a new parallel and dynamically reconfigurable hardware architecture tailored to this application area. Its performance issues and potential are discussed by the implementation of a flexible and computation-intensive component of future mobile terminals.
NDC: Analyzing the impact of 3D-stacked memory+logic devices on MapReduce workloads While Processing-in-Memory has been investigated for decades, it has not been embraced commercially. A number of emerging technologies have renewed interest in this topic. In particular, the emergence of 3D stacking and the imminent release of Micron's Hybrid Memory Cube device have made it more practical to move computation near memory. However, the literature is missing a detailed analysis of a killer application that can leverage a Near Data Computing (NDC) architecture. This paper focuses on in-memory MapReduce workloads that are commercially important and are especially suitable for NDC because of their embarrassing parallelism and largely localized memory accesses. The NDC architecture incorporates several simple processing cores on a separate, non-memory die in a 3D-stacked memory package; these cores can perform Map operations with efficient memory access and without hitting the bandwidth wall. This paper describes and evaluates a number of key elements necessary in realizing efficient NDC operation: (i) low-EPI cores, (ii) long daisy chains of memory devices, (iii) the dynamic activation of cores and SerDes links. Compared to a baseline that is heavily optimized for MapReduce execution, the NDC design yields up to 15X reduction in execution time and 18X reduction in system energy.
A 1V 3.5 μW Bio-AFE With Chopper-Capacitor-Chopper Integrator-Based DSL and Low Power GM-C Filter This brief presents a low-noise, low-power bio-signal acquisition analog front-end (Bio-AFE). It mainly includes a capacitively coupled chopper-stabilized instrumentation amplifier (CCIA), a programmable gain amplifier (PGA), a low-pass filter (LPF), and a successive approximation analog to digital converter (SAR ADC). A chopper-capacitor-chopper integrator based DC servo loop (C3IB-DSL...
1.2
0.2
0.2
0.2
0.2
0.1
0
0
0
0
0
0
0
0
Optimized Virtual Model Reference Control for Ride and Handling Performance-Oriented Semiactive Suspension Systems This paper proposes an optimized virtual model reference (OVMR) control synthesis method for semiactive suspension control based on ride and vehicle handling characteristics. First, we present the semiactive Macpherson suspension system as an H∞ robust output feedback-oriented control model. Then, by using the combination of a set of linear matrix inequalities (LMIs) and genetic algorithm (GA), the desired internal states for the tracking control problem of the semiactive suspension can be obtained via an OVMR. To achieve the H∞ performance of ride comfort and vehicle handling against the influence of parameter uncertainties and external disturbances of the system, a robust adaptive controller is designed so that the controlled system can track the desired states generated from OVMR. The tracking control can be converted into a stabilization problem with asymptotic convergence in the sense of Lyapunov stability theorem. To validate the effectiveness of the proposed approach, the cosimulation technique is employed to bridge the gap between the mathematically well-defined system model and the optimization quality of control. It can be confirmed that the designed control system can achieve performance-effective suspension control through the confident software-in-the-loop (SITL) simulation.
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use dominance frontiers, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
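As a sketch of how dominance frontiers can be computed, the following uses the later Cooper-Harvey-Kennedy walk over the dominator tree, which produces the same sets as the recurrence developed in the paper; `preds` (CFG predecessors) and `idom` (immediate dominators) are assumed to be given.

```python
# Dominance frontiers: a join node b (two or more predecessors) is in the
# frontier of every node on the path from each predecessor up the
# dominator tree, stopping at idom(b).
def dominance_frontiers(nodes, preds, idom):
    df = {n: set() for n in nodes}
    for b in nodes:
        if len(preds.get(b, [])) >= 2:      # only join points matter
            for p in preds[b]:
                runner = p
                while runner != idom[b]:    # climb the dominator tree
                    df[runner].add(b)
                    runner = idom[runner]
    return df

# Diamond CFG: entry -> a, entry -> b, a -> join, b -> join.
preds = {"a": ["entry"], "b": ["entry"], "join": ["a", "b"]}
idom = {"a": "entry", "b": "entry", "join": "entry"}
print(dominance_frontiers(["entry", "a", "b", "join"], preds, idom))
# {'entry': set(), 'a': {'join'}, 'b': {'join'}, 'join': set()}
```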
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ◊W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ◊W. Thus, ◊W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
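The one operation Chord exposes, mapping a key onto a node, can be sketched on a toy identifier circle. This sketch keeps the whole ring in one place and omits finger tables, node joins, and failures, so it illustrates only the successor mapping, not the distributed protocol.

```python
# Toy Chord-style successor mapping on an m-bit identifier circle.
# A real deployment resolves this in O(log n) hops via finger tables.
M = 6
RING = 1 << M                       # 64 identifiers

def successor(node_ids, key):
    """First node clockwise from the key; Chord stores the key there."""
    key %= RING
    for n in sorted(node_ids):
        if n >= key:
            return n
    return min(node_ids)            # wrapped past the top of the circle

nodes = [1, 8, 14, 21, 32, 38, 42, 48, 51, 56]
print(successor(nodes, 54))         # 56
print(successor(nodes, 60))         # 1 (wrap-around)
```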
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area² product (EDA²P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA²P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
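As one concrete instance of the method surveyed above, here is a minimal ADMM loop for the lasso: the x-update solves a ridge-regularized system (factored once), the z-update is the soft-thresholding proximal step for the l1 term, and u is the scaled dual variable. The penalty rho and the iteration count are illustrative choices, not recommendations from the review.

```python
import numpy as np

def soft_threshold(v, k):
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

def lasso_admm(A, b, lam, rho=1.0, iters=200):
    """Minimize 0.5*||Ax - b||^2 + lam*||x||_1 via ADMM."""
    n = A.shape[1]
    x = z = u = np.zeros(n)
    # Factor once: each x-update solves (A^T A + rho I) x = A^T b + rho (z - u).
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))
    Atb = A.T @ b
    for _ in range(iters):
        rhs = Atb + rho * (z - u)
        x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))
        z = soft_threshold(x + u, lam / rho)   # proximal step for the l1 term
        u = u + x - z                          # scaled dual ascent
    return z

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20)
x_true[:3] = [1.0, -2.0, 0.5]
b = A @ x_true + 0.01 * rng.standard_normal(50)
print(np.round(lasso_admm(A, b, lam=1.0), 2))  # recovers the 3-sparse support
```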
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)-size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε²}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D+√n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(log N), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator; while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via error amplifier) with a moderate-gain high-frequency path (via bandpass filter) at the inputs of PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate two zeros required in a Type III compensator. Constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process with constant Gm/C biasing technique being applied to one of the designs. Measurement result shows that converter output is settled within 7 μs for a load current step of 500 mA. Peak efficiency of 97% is obtained at 360 mW output power, and high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of proposed compensator is reduced by > 75 % in both designs, compared to an equivalent conventional Type III compensator.
Fundamental analysis of a car to car visible light communication system This paper presents a mathematical model for car-to-car (C2C) visible light communications (VLC) that aims to predict the system performance under different communication geometries. A market-weighted headlamp beam pattern model is employed. We consider both the line-of-sight (LOS) and non-line-of-sight (NLOS) links, and outline the relationship between the communication distance, the system bit error rate (BER) performance and the BER distribution on a vertical plane. Results show that by placing a photodetector (PD) at a height of 0.2-0.4 m above road surface in the car, the communications coverage range can be extended up to 20 m at a data rate of 2 Mbps.
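The abstract does not reproduce its model; as a reference point, here is the generic Lambertian line-of-sight gain from which VLC link analyses commonly start (the paper substitutes a market-weighted headlamp beam pattern for the ideal Lambertian source):

```latex
% Generic LOS DC channel gain for a Lambertian source of order m:
H(0) = \frac{(m+1)\,A}{2\pi d^{2}}\,\cos^{m}(\phi)\cos(\psi),
\qquad
m = -\frac{\ln 2}{\ln\cos\Phi_{1/2}},
```

where A is the photodetector area, d the link distance, φ and ψ the irradiance and incidence angles, and Φ_{1/2} the source half-power semi-angle.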
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Context-Sensitive Fencing: Securing Speculative Execution via Microcode Customization This paper describes context-sensitive fencing (CSF), a microcode-level defense against multiple variants of Spectre. CSF leverages the ability to dynamically alter the decoding of the instruction stream, to seamlessly inject new micro-ops, including fences, only when dynamic conditions indicate they are needed. This enables the processor to protect against the attack, but with minimal impact on the efficacy of key performance features such as speculative execution. This research also examines several alternative fence implementations, and introduces three new types of fences which allow most dynamic reorderings of loads and stores, but in a way that prevents speculative accesses from changing visible cache state. These optimizations reduce the performance overhead of the defense mechanism, compared to state-of-the-art software-based fencing mechanisms by a factor of six.
The equational theory of pomsets Pomsets have been introduced as a model of concurrency. Since a pomset is a string in which the total order has been relaxed to be a partial order, in this paper we view them as a generalization of strings, and investigate their algebraic properties. In particular, we investigate the axiomatic properties of pomsets, sets of pomsets and ideals of pomsets, under such operations as concatenation, parallel composition, union and their associated closure operations. We find that the equational theory of sets, pomsets under concatenation, parallel composition and union is finitely axiomatizable, whereas the theory of languages under the analogous operations is not. A similar result is obtained for ideals of pomsets, which incorporate the notion of subsumption which is also known as augmentation. Finally, we show that the addition of any closure operation (parallel or serial) leads to nonfinite axiomatizability of the resulting equational theory.
Codejail: Application-Transparent Isolation of Libraries with Tight Program Interactions.
Conditional Speculation: An Effective Approach to Safeguard Out-of-Order Execution Against Spectre Attacks Speculative execution side-channel vulnerabilities such as Spectre reveal that conventional architecture designs lack security consideration. This paper proposes a software-transparent defense mechanism, named as Conditional Speculation, against Spectre vulnerabilities found on traditional out-of-order microprocessors. It introduces the concept of security dependence to mark speculative memory instructions which could leak information with potential security risk. More specifically, security-dependent instructions are detected and marked with suspect speculation flags in the Issue Queue. All the instructions can be speculatively issued for execution in accordance with the classic out-of-order pipeline. For those instructions with suspect speculation flags, they are considered as safe instructions if their speculative execution will not refill new cache lines with unauthorized privilege data. Otherwise, they are considered as unsafe instructions and thus not allowed to execute speculatively. To reduce the performance impact from not executing unsafe instructions speculatively, we investigate two filtering mechanisms, Cache-hit based Hazard Filter and Trusted Page Buffer based Hazard Filter to filter out false security hazards. Our design philosophy is to speculatively execute safe instructions to maintain the performance benefits of out-of-order execution while blocking the speculative execution of unsafe instructions for security consideration. We evaluate Conditional Speculation in terms of performance, security and area. The experimental results show that the hardware overhead is marginal and the performance overhead is minimal.
Speculative Probing: Hacking Blind in the Spectre Era To defeat ASLR or more advanced fine-grained and leakage-resistant code randomization schemes, modern software exploits rely on information disclosure to locate gadgets inside the victim's code. In the absence of such info-leak vulnerabilities, attackers can still hack blind and derandomize the address space by repeatedly probing the victim's memory while observing crash side effects, but doing so is only feasible for crash-resistant programs. However, high-value targets such as the Linux kernel are not crash-resistant. Moreover, the anomalously large number of crashes is often easily detectable. In this paper, we show that the Spectre era enables an attacker armed with a single memory corruption vulnerability to hack blind without triggering any crashes. Using speculative execution for crash suppression allows the elevation of basic memory write vulnerabilities into powerful speculative probing primitives that leak through microarchitectural side effects. Such primitives can repeatedly probe victim memory and break strong randomization schemes without crashes and bypass all deployed mitigations against Spectre-like attacks. The key idea behind speculative probing is to break Spectre mitigations using memory corruption and resurrect Spectre-style disclosure primitives to mount practical blind software exploits. To showcase speculative probing, we target the Linux kernel, a crash-sensitive victim that has so far been out of reach of blind attacks, mount end-to-end exploits that compromise the system with just-in-time code reuse and data-only attacks from a single memory write vulnerability, and bypass strong Spectre and strong randomization defenses. Our results show that it is crucial to consider synergies between different (Spectre vs. code reuse) threat models to fully comprehend the attack surface of modern systems.
Sparc T4: A Dynamically Threaded Server-on-a-Chip The Sparc T4 is the next generation of Oracle's multicore, multithreaded 64-bit Sparc server processor. It delivers significant performance improvements over its predecessor, the Sparc T3 processor. The authors describe Sparc T4's key features and detail the microarchitecture of the dynamically threaded S3 processor core, which is implemented on Sparc T4.
Abstract Interpretation under Speculative Execution Analyzing the behavior of a program running on a processor that supports speculative execution is crucial for applications such as execution time estimation and side channel detection. Unfortunately, existing static analysis techniques based on abstract interpretation do not model speculative execution since they focus on functional properties of a program while speculative execution does not change the functionality. To fill the gap, we propose a method to make abstract interpretation sound under speculative execution. There are two contributions. First, we introduce the notion of virtual control flow to augment instructions that may be speculatively executed and thus affect subsequent instructions. Second, to make the analysis efficient, we propose optimizations to handle merges and loops and to safely bound the speculative execution depth. We have implemented and evaluated the proposed method in a static cache analysis for execution time estimation and side channel detection. Our experiments show that the new method, while guaranteed to be sound under speculative execution, outperforms state-of-the-art abstract interpretation techniques that may be unsound.
Another Flip in the Wall of Rowhammer Defenses The Rowhammer bug allows unauthorized modification of bits in DRAM cells from unprivileged software, enabling powerful privilege-escalation attacks. Sophisticated Rowhammer countermeasures have been presented, aiming at mitigating the Rowhammer bug or its exploitation. However, the state of the art provides insufficient insight on the completeness of these defenses. In this paper, we present novel Rowhammer attack and exploitation primitives, showing that even a combination of all defenses is ineffective. Our new attack technique, one-location hammering, breaks previous assumptions on requirements for triggering the Rowhammer bug, i.e., we do not hammer multiple DRAM rows but only keep one DRAM row constantly open. Our new exploitation technique, opcode flipping, bypasses recent isolation mechanisms by flipping bits in a predictable and targeted way in userspace binaries. We replace conspicuous and memory-exhausting spraying and grooming techniques with a novel reliable technique called memory waylaying. Memory waylaying exploits system-level optimizations and a side channel to coax the operating system into placing target pages at attacker-chosen physical locations. Finally, we abuse Intel SGX to hide the attack entirely from the user and the operating system, making any inspection or detection of the attack infeasible. Our Rowhammer enclave can be used for coordinated denial-of-service attacks in the cloud and for privilege escalation on personal computers. We demonstrate that our attacks evade all previously proposed countermeasures for commodity systems.
Page placement algorithms for large real-indexed caches When a computer system supports both paged virtual memory and large real-indexed caches, cache performance depends in part on the main memory page placement. To date, most operating systems place pages by selecting an arbitrary page frame from a pool of page frames that have been made available by the page replacement algorithm. We give a simple model that shows that this naive (arbitrary) page placement leads to up to 30% unnecessary cache conflicts. We develop several page placement algorithms, called careful-mapping algorithms, that try to select a page frame (from the pool of available page frames) that is likely to reduce cache contention. Using trace-driven simulation, we find that careful mapping results in 10–20% fewer (dynamic) cache misses than naive mapping (for a direct-mapped real-indexed multimegabyte cache). Thus, our results suggest that careful mapping by the operating system can get about half the cache miss reduction that a cache size (or associativity) doubling can.
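A minimal sketch of one careful-mapping policy in the spirit of the paper: prefer a free page frame whose cache "color" matches the virtual page's color, so pages that are contiguous in virtual memory do not collide in a direct-mapped real-indexed cache. The cache geometry and the fallback rule here are illustrative.

```python
# Careful page placement by cache color: a 64 KiB direct-mapped,
# real-indexed cache with 4 KiB pages has 16 page "colors".
PAGE = 4096
CACHE_BYTES = 64 << 10
COLORS = CACHE_BYTES // PAGE          # 16 colors

def color(page_number):
    return page_number % COLORS

def place_page(vpn, free_frames):
    """Pick a free frame with the virtual page's color, else any frame."""
    for f in free_frames:
        if color(f) == color(vpn):
            free_frames.remove(f)
            return f
    return free_frames.pop()          # naive fallback: arbitrary frame

free = list(range(100, 140))
print(place_page(5, free))            # 101, since color(101) == color(5) == 5
```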
Understanding Availability This paper addresses a simple, yet fundamental question in the design of peer-to-peer systems: What does it mean when we say "availability" and how does this understanding impact the engineering of practical systems? We argue that existing measurements and models do not capture the complex time-varying nature of availability in today's peer-to-peer environments. Further, we show that unforeseen methodological shortcomings have dramatically biased previous analyses of this phenomenon. As the basis of our study, we empirically characterize the availability of a large peer-to-peer system over a period of 7 days, analyze the dependence of the underlying availability distributions, measure host turnover in the system, and discuss how these results may affect the design of high-availability peer-to-peer services.
Reducing MOSFET 1/f noise and power consumption by switched biasing Switched biasing is proposed as a technique for reducing the 1/f noise in MOSFETs. Conventional techniques, such as chopping or correlated double sampling, reduce the effect of 1/f noise in electronic circuits, whereas the switched biasing technique reduces the 1/f noise itself. Whereas noise reduction techniques generally lead to more power consumption, switched biasing can reduce the power cons...
Efficient routing in carrier-based mobile networks The past years have seen an intense research effort directed at study of delay/disruption tolerant networks and related concepts (intermittently connected networks, opportunistic mobility networks). As a fundamental primitive, routing in such networks has been one of the research foci. While multiple network models have been proposed and routing in them investigated, most of the published results are of heuristic nature with experimental validation; analytical results are scarce and apply mostly to networks whose structure follows deterministic schedule. In this paper, we propose a simple model of opportunistic mobility network based on oblivious carriers, and investigate the routing problem in such networks. We present an optimal online routing algorithm and compare it with a simple shortest-path inspired routing and optimal offline routing. In doing so, we identify the key parameters (the minimum non-zero probability of meeting among the carrier pairs, and the number of carriers a given carrier comes into contact) driving the separation among these algorithms.
Spiking Deep Convolutional Neural Networks for Energy-Efficient Object Recognition Deep-learning neural networks such as convolutional neural network (CNN) have shown great potential as a solution for difficult vision problems, such as object recognition. Spiking neural networks (SNN)-based architectures have shown great potential as a solution for realizing ultra-low power consumption using spike-based neuromorphic hardware. This work describes a novel approach for converting a deep CNN into a SNN that enables mapping CNN to spike-based hardware architectures. Our approach first tailors the CNN architecture to fit the requirements of SNN, then trains the tailored CNN in the same way as one would with CNN, and finally applies the learned network weights to an SNN architecture derived from the tailored CNN. We evaluate the resulting SNN on publicly available Defense Advanced Research Projects Agency (DARPA) Neovision2 Tower and CIFAR-10 datasets and show similar object recognition accuracy as the original CNN. Our SNN implementation is amenable to direct mapping to spike-based neuromorphic hardware, such as the ones being developed under the DARPA SyNAPSE program. Our hardware mapping analysis suggests that SNN implementation on such spike-based hardware is two orders of magnitude more energy-efficient than the original CNN implementation on off-the-shelf FPGA-based hardware.
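A toy illustration of the conversion idea rather than the paper's pipeline: the trained weights are reused, and each ReLU unit is replaced by an integrate-and-fire neuron whose firing rate over a long window approximates the analog activation. All parameters here are illustrative.

```python
import numpy as np

def if_layer_rates(W, in_rates, T=2000, v_th=1.0, seed=0):
    """Simulate one integrate-and-fire layer for T steps; inputs spike
    as Bernoulli trials with probabilities in_rates."""
    rng = np.random.default_rng(seed)
    v = np.zeros(W.shape[0])
    spikes = np.zeros(W.shape[0])
    for _ in range(T):
        in_spikes = (rng.random(in_rates.shape) < in_rates).astype(float)
        v += W @ in_spikes             # integrate weighted input spikes
        fired = v >= v_th
        spikes += fired
        v[fired] -= v_th               # subtractive reset: rate tracks drive
    return spikes / T                  # empirical firing rates

W = np.array([[0.5, 0.2], [-0.3, 0.8]])
x = np.array([0.6, 0.4])
print("IF rates :", if_layer_rates(W, x))      # ~ [0.38, 0.14]
print("ReLU ref :", np.maximum(W @ x, 0.0))    # analog activation
```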
A 1V 3.5 μW Bio-AFE With Chopper-Capacitor-Chopper Integrator-Based DSL and Low Power GM-C Filter This brief presents a low-noise, low-power bio-signal acquisition analog front-end (Bio-AFE). It mainly includes a capacitively coupled chopper-stabilized instrumentation amplifier (CCIA), a programmable gain amplifier (PGA), a low-pass filter (LPF), and a successive approximation analog to digital converter (SAR ADC). A chopper-capacitor-chopper integrator based DC servo loop (C3IB-DSL...
1.037092
0.033333
0.033333
0.033333
0.033333
0.021667
0.012963
0.00375
0.000311
0
0
0
0
0
No Need to Hide: Protecting Safe Regions on Commodity Hardware. As modern 64-bit x86 processors no longer support the segmentation capabilities of their 32-bit predecessors, most research projects assume that strong in-process memory isolation is no longer an affordable option. Instead of strong, deterministic isolation, new defense systems therefore rely on the probabilistic pseudo-isolation provided by randomization to "hide" sensitive (or safe) regions. However, recent attacks have shown that such protection is insufficient; attackers can leak these safe regions in a variety of ways. In this paper, we revisit isolation for x86-64 and argue that hardware features enabling efficient deterministic isolation do exist. We first present a comprehensive study on commodity hardware features that can be repurposed to isolate safe regions in the same address space (e.g., Intel MPX and MPK). We then introduce MemSentry, a framework to harden modern defense systems with commodity hardware features instead of information hiding. Our results show that some hardware features are more effective than others in hardening such defenses in each scenario and that features originally conceived for other purposes (e.g., Intel MPX for bounds checking) are surprisingly efficient at isolating safe regions compared to their software equivalent (i.e., SFI).
Thwarting Memory Disclosure with Efficient Hypervisor-enforced Intra-domain Isolation Exploiting memory disclosure vulnerabilities like the HeartBleed bug may cause arbitrary reading of a victim's memory, leading to leakage of critical secrets such as crypto keys, personal identity and financial information. While isolating code that manipulates critical secrets into an isolated execution environment is a promising countermeasure, existing approaches are either too coarse-grained to prevent intra-domain attacks, or require excessive intervention from low-level software (e.g., hypervisor or OS), or both. Further, few of them are applicable to large-scale software with millions of lines of code. This paper describes a new approach, namely SeCage, which retrofits commodity hardware virtualization extensions to support efficient isolation of sensitive code manipulating critical secrets from the remaining code. SeCage is designed to work under a strong adversary model where a victim application or even the OS may be controlled by the adversary, while supporting large-scale software with small deployment cost. SeCage combines static and dynamic analysis to decompose monolithic software into several compartments, each of which may contain different secrets and their corresponding code. Following the idea of separating control and data plane, SeCage retrofits the VMFUNC mechanism and nested paging in Intel processors to transparently provide different memory views for different compartments, while allowing low-cost and transparent invocation across domains without hypervisor intervention. We have implemented SeCage in KVM on a commodity Intel machine. To demonstrate the effectiveness of SeCage, we deploy it to the Nginx and OpenSSH server with the OpenSSL library as well as CryptoLoop with small efforts. Security evaluation shows that SeCage can prevent the disclosure of private keys from HeartBleed attacks and memory scanning from rootkits. The evaluation shows that SeCage only incurs small performance and space overhead.
Data Space Randomization Over the past several years, US-CERT advisories, as well as most critical updates from software vendors, have been due to memory corruption vulnerabilities such as buffer overflows, heap overflows, etc. Several techniques have been developed to defend against the exploitation of these vulnerabilities, with the most promising defenses being based on randomization. Two randomization techniques have been explored so far: address space randomization (ASR) that randomizes the location of objects in virtual memory, and instruction set randomization (ISR) that randomizes the representation of code. We explore a third form of randomization called data space randomization (DSR) that randomizes the representation of data stored in program memory. Unlike ISR, DSR is effective against non-control data attacks as well as code injection attacks. Unlike ASR, it can protect against corruption of non-pointer data as well as pointer-valued data. Moreover, DSR provides a much higher range of randomization (typically 232 for 32-bit data) as compared to ASR. Other interesting aspects of DSR include (a) it does not share a weakness common to randomization-based defenses, namely, susceptibility to information leakage attacks, and (b) it is capable of detecting some exploits that are missed by full bounds-checking techniques, e.g., some of the overflows from one field of a structure to the next field. Our implementation results show that with appropriate design choices, DSR can achieve a performance overhead in the range of 5% to 30% for a range of programs.
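A toy model of the DSR mechanism (the real system instruments C programs at compile time): every load and store of a variable goes through a per-variable XOR mask, so a raw overwrite that bypasses the instrumentation decodes to an unpredictable value.

```python
import os

class MaskedCell:
    """One 4-byte 'variable' kept in memory only in masked form."""
    def __init__(self, value=0):
        self.mask = int.from_bytes(os.urandom(4), "little")
        self._store = value ^ self.mask   # instrumented store

    def load(self):
        return self._store ^ self.mask    # instrumented load

x = MaskedCell(0x41414141)
print(hex(x.load()))       # 0x41414141: masked store/load round-trips
x._store = 0xDEADBEEF      # attacker writes raw bytes, bypassing the mask
print(hex(x.load()))       # garbage, not 0xdeadbeef
```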
Portable Software Fault Isolation We present a new technique for architecture-portable software fault isolation (SFI), together with a prototype implementation in the Coq proof assistant. Unlike traditional SFI, which relies on analysis of assembly-level programs, we analyze and rewrite programs in a compiler intermediate language, the Cminor language of the CompCert C compiler. But like traditional SFI, the compiler remains outside of the trusted computing base. By composing our program transformer with the verified back-end of CompCert and leveraging CompCert's formally proved preservation of the behavior of safe programs, we can obtain binary modules that satisfy the SFI memory safety policy for any of CompCert's supported architectures (currently: PowerPC, ARM, and x86-32). This allows the same SFI analysis to be used across multiple architectures, greatly simplifying the most difficult part of deploying trustworthy SFI systems.
EffectiveSan: type and memory error detection using dynamically typed C/C++ Low-level programming languages with weak/static type systems, such as C and C++, are vulnerable to errors relating to the misuse of memory at runtime, such as (sub-)object bounds overflows, (re)use-after-free, and type confusion. Such errors account for many security and other undefined behavior bugs for programs written in these languages. In this paper, we introduce the notion of dynamically typed C/C++, which aims to detect such errors by dynamically checking the "effective type" of each object before use at runtime. We also present an implementation of dynamically typed C/C++ in the form of the Effective Type Sanitizer (EffectiveSan). EffectiveSan enforces type and memory safety using a combination of low-fat pointers, type meta data and type/bounds check instrumentation. We evaluate EffectiveSan against the SPEC2006 benchmark suite and the Firefox web browser, and detect several new type and memory errors. We also show that EffectiveSan achieves high compatibility and reasonable overheads for the given error coverage. Finally, we highlight that EffectiveSan is one of only a few tools that can detect sub-object bounds errors, and uses a novel approach (dynamic type checking) to do so.
Ghostbusting - mitigating spectre with intraprocess memory isolation.
Control-flow integrity principles, implementations, and applications Current software attacks often build on exploits that subvert machine-code execution. The enforcement of a basic safety property, control-flow integrity (CFI), can prevent such attacks from arbitrarily controlling program behavior. CFI enforcement is simple and its guarantees can be established formally, even with respect to powerful adversaries. Moreover, CFI enforcement is practical: It is compatible with existing software and can be done efficiently using software rewriting in commodity systems. Finally, CFI provides a useful foundation for enforcing further security policies, as we demonstrate with efficient software implementations of a protected shadow call stack and of access control for memory regions.
Tapestry: a resilient global-scale overlay for service deployment We present Tapestry, a peer-to-peer overlay routing infrastructure offering efficient, scalable, location-independent routing of messages directly to nearby copies of an object or service using only localized resources. Tapestry supports a generic decentralized object location and routing applications programming interface using a self-repairing, soft-state-based routing layer. The paper presents the Tapestry architecture, algorithms, and implementation. It explores the behavior of a Tapestry deployment on PlanetLab, a global testbed of approximately 100 machines. Experimental results show that Tapestry exhibits stable behavior and performance as an overlay, despite the instability of the underlying network layers. Several widely distributed applications have been implemented on Tapestry, illustrating its utility as a deployment infrastructure.
Cellular Logic-in-Memory Arrays As a direct consequence of large-scale integration, many advantages in the design, fabrication, testing, and use of digital circuitry can be achieved if the circuits can be arranged in a two-dimensional iterative, or cellular, array of identical elementary networks, or cells. When a small amount of storage is included in each cell, the same array may be regarded either as a logically enhanced memory array, or as a logic array whose elementary gates and connections can be "programmed" to realize a desired logical behavior.
Evolving Distributed Algorithms With Genetic Programming In this paper, we evaluate the applicability of genetic programming (GP) for the evolution of distributed algorithms. We carry out a large-scale experimental study in which we tackle three well-known problems from distributed computing with six different program representations. For this purpose, we first define a simulation environment in which phenomena such as asynchronous computation at changing speed and messages taking over each other, i.e., out-of-order message delivery, occur with high probability. Second, we define extensions and adaptations of established GP approaches (such as tree-based and linear GP) in order to make them suitable for representing distributed algorithms. Third, we introduce novel rule-based GP methods designed especially with the characteristic difficulties of evolving algorithms (such as epistasis) in mind. Based on our extensive experimental study of these approaches, we conclude that GP is indeed a viable method for evolving non-trivial, deterministic, non-approximative distributed algorithms. Furthermore, one of the two rule-based approaches is shown to exhibit superior performance in most of the tasks and thus can be considered as an interesting idea also for other problem domains.
Approximately bisimilar symbolic models for nonlinear control systems Control systems are usually modeled by differential equations describing how physical phenomena can be influenced by certain control parameters or inputs. Although these models are very powerful when dealing with physical phenomena, they are less suited to describe software and hardware interfacing with the physical world. For this reason there is a growing interest in describing control systems through symbolic models that are abstract descriptions of the continuous dynamics, where each "symbol" corresponds to an "aggregate" of states in the continuous model. Since these symbolic models are of the same nature of the models used in computer science to describe software and hardware, they provide a unified language to study problems of control in which software and hardware interact with the physical world. Furthermore, the use of symbolic models enables one to leverage techniques from supervisory control and algorithms from game theory for controller synthesis purposes. In this paper we show that every incrementally globally asymptotically stable nonlinear control system is approximately equivalent (bisimilar) to a symbolic model. The approximation error is a design parameter in the construction of the symbolic model and can be rendered as small as desired. Furthermore, if the state space of the control system is bounded, the obtained symbolic model is finite. For digital control systems, and under the stronger assumption of incremental input-to-state stability, symbolic models can be constructed through a suitable quantization of the inputs.
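For reference, the notion at the heart of the paper can be stated compactly. This is the standard definition of an ε-approximate bisimulation relation R between two systems with output maps H_1 and H_2, paraphrased rather than quoted from the paper:

```latex
% R \subseteq X_1 \times X_2 is an \varepsilon-approximate bisimulation if,
% for every (x_1, x_2) \in R,
d\big(H_1(x_1),\,H_2(x_2)\big) \le \varepsilon ,
% and every transition x_1 \to x_1' is matched by some x_2 \to x_2' with
% (x_1', x_2') \in R, and symmetrically for transitions out of x_2.
```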
Modeling of software radio aspects by mapping of SDL and CORBA With the evolution of 3rd generation mobile communications standardization, the software radio concept has the potential to offer a pragmatic solution - a software implementation that allows the mobile terminal to adapt dynamically to its radio environment. The mapping of SDL and CORBA mechanisms is introduced, in order to provide a generic platform for the implementation of future mobile services, supporting standardized interfaces and manufacturer platform independent object and service functionality description. For the functional entity diagram model, it is proposed that the functional entities be designed as objects, the functional entities group as 'open' object oriented SDL platforms, and the interfaces between them as CORBA IDLs, communicating via the ORB in a generic implementation and location independent way. The functional entity groups are proposed to be modeled as SDL block types, while the functional entities and sub-entities as SDL process and service types. The objects interact with each other like client or server objects requesting or receiving services from other objects. Every object has a CORBA IDL interface, which allows every component to be distributed in an optimum way by providing a standardized infrastructure, ensuring interoperability, flexibility, reusability, transparency and management capabilities.
Quadrature Bandpass Sampling Rules for Single- and Multiband Communications and Satellite Navigation Receivers In this paper, we examine how existing rules for bandpass sampling rates can be applied to quadrature bandpass sampling. We find that there are significantly more allowable sampling rates and that the minimum rate can be reduced.
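For context, here is the classical rule for real-valued bandpass sampling that the paper generalizes: a band [f_L, f_H] of width B = f_H − f_L admits the uniform sampling rates

```latex
\frac{2 f_H}{n} \;\le\; f_s \;\le\; \frac{2 f_L}{n-1},
\qquad n \in \mathbb{Z},\quad 1 \le n \le \left\lfloor \frac{f_H}{B} \right\rfloor .
```

In the quadrature (complex I/Q) case studied in the paper, more rate choices become valid and the minimum rate approaches B rather than 2B.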
A 32-Channel Time-Multiplexed Artifact-Aware Neural Recording System This paper presents a low-power, low-noise microsystem for the recording of neural local field potentials or intracranial electroencephalographic signals. It features 32 time-multiplexed channels at the electrode interface and offers the possibility to spatially delta encode data to take advantage of the large correlation of signals captured from nearby channels. The circuit also implements a mixed-signal voltage-triggered auto-ranging algorithm which allows to attenuate large interferers in digital domain while preserving neural information. This effectively increases the system dynamic range and avoids the onset of saturation. A prototype, fabricated in a standard 180 nm CMOS process, has been experimentally verified in-vitro with cellular cultures of primary cortical neurons from mice. The system shows an integrated input-referred noise in the 0.5–200 Hz band of 1.4 μVrms for a spot noise of about 85 nV/√Hz. The system draws 1.5 μW per channel from 1.2 V supply and obtains 71 dB + 26 dB dynamic range when the artifact-aware auto-ranging mechanism is enabled, without penalising other critical specifications such as crosstalk between channels or common-mode and power supply rejection ratios.
1.11
0.11
0.1
0.1
0.1
0.05
0.0075
0
0
0
0
0
0
0
A 2.4 GHz 4 mW Integer-N Inductorless RF Synthesizer. The high phase noise of ring oscillators has generally discouraged their use in RF synthesis. This paper introduces an integer-N synthesizer that employs a type-I loop to achieve a wide bandwidth, allowing the use of ring oscillators, and a master-slave sampling loop filter along with harmonic traps to suppress spurs. A 2.4 GHz prototype fabricated in 45 nm digital CMOS technology provides a loop ...
A 2.4-GHz 6.4-mW fractional-N inductorless RF synthesizer. A cascaded synthesizer architecture incorporates a digital delay-line-based filter and an analog noise trap to suppress the quantization noise of the ΣΔ modulator. Operating with a reference frequency of 22.6 MHz, the synthesizer achieves a bandwidth of 10 MHz in the first loop and 12 MHz in the second, heavily suppressing the phase noise of its constituent ring oscillators. Realized in 45-nm digi...
A Discrete-Time Model for the Design of Type-II PLLs With Passive Sampled Loop Filters Type-II charge-pump (CP) phase-locked loops (PLLs) are used extensively in electronic systems for frequency synthesis. Recently, a passive sampled loop filter (SLF) has been shown to offer major benefits over the conventional continuous-time loop filter traditionally used in such PLLs. These benefits include greatly enhanced reference spur suppression, elimination of CP pulse-position modulation nonlinearity, and, in the case of phase noise cancelling fractional-N PLLs, improved phase noise cancellation. The main disadvantage of the SLF to date has been the lack of a linear time-invariant (LTI) model with which to perform the system-level design of SLF-based PLLs. Without such a model, designers are forced to rely on trial and error iteration supported by lengthy transient simulations. This paper presents an accurate LTI model of SLF-based type-II PLLs that eliminates this disadvantage.
An Inductorless 20-Gb/s CDR With High Jitter Tolerance. A full-rate clock and data recovery loop employs a three-stage ring voltage-controlled oscillator, a master-slave passive sampler as both a phase detector and a filter, and a new flip-flop to achieve a loop bandwidth of 170 MHz. Implemented in 45-nm CMOS technology, the circuit occupies an area of 14 μm × 26 μm and exhibits a jitter tolerance of 2 UI at 5 MHz and a recovered clock jitter of 459 f...
Clock Multiplication Techniques Using Digital Multiplying Delay-Locked Loops A highly-digital clock multiplication architecture that achieves excellent jitter and mitigates supply noise is presented. The proposed architecture utilizes a calibration-free digital multiplying delay-locked loop (MDLL) to decouple the tradeoff between time-to-digital converter (TDC) resolution and oscillator phase noise in digital phase-locked loops (PLLs). Both reduction in jitter accumulation down to sub-picosecond levels and improved supply noise rejection over conventional PLL architectures is demonstrated with low power consumption. A digital PLL that employs a 1-bit TDC and a low power regulator that seeks to improve supply noise immunity without increasing loop delay is presented and used to compare with the proposed MDLL. The prototype MDLL and DPLL chips are fabricated in a 0.13 μm CMOS technology and operate from a nominal 1.1 V supply. The proposed MDLL achieves an integrated jitter of 400 fs rms at 1.5 GHz output frequency from a 375 MHz reference clock, while consuming 890 μW. The worst-case supply noise sensitivity of the MDLL is 20 fspp/mVpp which translates to a jitter degradation of 3.8 ps in the presence of 200 mV supply noise. The proposed clock multipliers occupy active die areas of 0.25 mm² and 0.2 mm² for the MDLL and DPLL, respectively.
Jitter and phase noise in ring oscillators A companion analysis of clock jitter and phase noise of single-ended and differential ring oscillators is presented. The impulse sensitivity functions are used to derive expressions for the jitter and phase noise of ring oscillators. The effect of the number of stages, power dissipation, frequency of oscillation, and short-channel effects on the jitter and phase noise of ring oscillators is analyzed. Jitter and phase noise due to substrate and supply noise is discussed, and the effect of symmetry on the upconversion of 1/f noise is demonstrated. Several new design insights are given for low jitter/phase-noise design. Good agreement between theory and measurements is observed. Due to their integrated nature, ring oscillators have become an essential building block in many digital and communication systems. They are used as voltage-controlled oscillators (VCOs) in applications such as clock recovery circuits for serial data communications (1)-(4), disk-drive read channels (5), (6), on-chip clock distribution (7)-(10), and integrated frequency synthesizers (10), (11). Although they have not found many applications in radio frequency (RF), they can be used for some low-tier RF systems. Recently, there has been some work on modeling jitter and phase noise in ring oscillators. References (12) and (13) develop models for the clock jitter based on time-domain treatments for MOS and bipolar differential ring oscillators, respectively. Reference (14) proposes a frequency-domain approach to find the phase noise based on a linear time-invariant model for differential ring oscillators with a small number of stages. In this paper, we develop a parallel treatment of frequency-domain phase noise (15) and time-domain clock jitter for ring oscillators. We apply the phase-noise model presented in (16) to obtain general expressions for jitter and phase noise of the ring oscillators. The next section briefly reviews the phase-noise model presented in (16). In Section III, we apply the model to timing jitter and develop an expression for the timing jitter of oscillators, while Section IV provides the derivation of a closed-form expression to calculate the rms value of the impulse sensitivity function (ISF). Section V introduces expressions for jitter and phase noise in single-ended and differential ring oscillators.
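One standard bridge between the time- and frequency-domain views treated above, valid for a free-running oscillator whose jitter accumulation is dominated by white noise (κ is the proportionality constant of that accumulation; the identity is a general result stated here for orientation, not quoted from the paper):

```latex
\sigma_{\Delta t}(\Delta T) = \kappa\sqrt{\Delta T}
\quad\Longrightarrow\quad
\mathcal{L}(\Delta f) \approx \frac{f_0^{2}\,\kappa^{2}}{\Delta f^{2}},
```

with f_0 the oscillation frequency and Δf the offset from the carrier; L is a power ratio, so take 10 log_10 of it for dBc/Hz.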
Replica compensated linear regulators for supply-regulated phase-locked loops Supply-regulated phase-locked loops rely upon the VCO voltage regulator to maintain a low sensitivity to supply noise and hence low overall jitter. By analyzing regulator supply rejection, we show that in order to simultaneously meet the bandwidth and low dropout requirements, previous regulator implementations used in supply-regulated PLLs suffer from unfavorable tradeoffs between power supply rejection and power consumption. We therefore propose a compensation technique that places the regulator's amplifier in a local replica feedback loop, stabilizing the regulator by increasing the amplifier bandwidth while lowering its gain. Even though the forward gain of the amplifier is reduced, supply noise affects the replica output in addition to the actual output, and therefore the amplifier's gain to reject supply noise is effectively restored. Analysis shows that for reasonable mismatch between the replica and actual loads, regulator performance is uncompromised, and experimental results from a 90 nm SOI test chip confirm that with the same power consumption, the proposed regulator achieves at least 4 dB higher supply rejection than the previous regulator design. Furthermore, simulations show that if not for other supply rejection-limiting components in the PLL, the supply rejection improvement of the proposed regulator is greater than 15 dB.
A 10-Gb/s CMOS clock and data recovery circuit with a half-rate binary phase/frequency detector A 10-Gb/s phase-locked clock and data recovery circuit incorporates a multiphase LC oscillator and a half-rate phase/frequency detector with automatic data retiming. Fabricated in 0.18-μm CMOS technology in an area of 1.75×1.55 mm², the circuit exhibits a capture range of 1.43 GHz, an rms jitter of 0.8 ps, a peak-to-peak jitter of 9.9 ps, and a bit error rate of 10⁻⁹ with a pseudorandom bit sequence of 2²³−1. The power dissipation excluding the output buffers is 91 mW from a 1.8-V supply.
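The paper's detector is a half-rate circuit with automatic retiming; the sketch below illustrates only the generic early/late decision of a binary (bang-bang) phase detector in the style of Alexander-type detectors, which is an assumption for illustration rather than the paper's implementation.

```python
# Hedged sketch of a generic binary phase-detector decision from three
# consecutive samples: the previous data bit, a sample taken at the
# expected data edge, and the current data bit.

def bang_bang_pd(prev_bit: int, edge_sample: int, curr_bit: int) -> int:
    """Return +1 (clock late), -1 (clock early), or 0 (no transition)."""
    if prev_bit == curr_bit:
        return 0  # no data transition, so no phase information
    # On a transition, the edge sample reveals which side the clock is on:
    # if it still equals the old bit, the clock sampled too early.
    return -1 if edge_sample == prev_bit else +1

# Example: data goes 0 -> 1 and the edge sample equals the old bit,
# so the sampling clock is early.
print(bang_bang_pd(0, 0, 1))  # -1
```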
Fully integrated wideband high-current rectifiers for inductively powered devices This paper describes the design and implementation of fully integrated rectifiers in BiCMOS and standard CMOS technologies for rectifying an externally generated RF carrier signal in inductively powered wireless devices, such as biomedical implants, radio-frequency identification (RFID) tags, and smartcards, to generate an on-chip dc supply. Various full-wave rectifier topologies and low-power circuit design techniques are employed to decrease substrate leakage current and parasitic components, reduce the possibility of latch-up, and improve power transmission efficiency and high-frequency performance of the rectifier block. These circuits are used in wireless neural stimulating microsystems, fabricated in two processes: the University of Michigan's 3-μm 1M/2P N-epi BiCMOS, and the AMI 1.5-μm 2M/2P N-well standard CMOS. The rectifier areas are 0.12-0.48 mm² in the above processes and they are capable of delivering >25 mW from a receiver coil to the implant circuitry. The performance of these integrated rectifiers has been tested and compared, using carrier signals in the 0.1-10-MHz range.
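A first-order feel for the quoted >25 mW capability can be had from the usual full-wave estimate that the bridge drops roughly two device voltages per half-cycle. The numbers below are hypothetical assumptions, not values from the paper.

```python
# Hedged first-order sketch of a full-wave rectifier's DC output.
# All component values are assumed for illustration only.

v_peak = 5.0    # V, assumed peak amplitude of the received carrier
v_drop = 0.4    # V, assumed effective drop per rectifying device
v_out = v_peak - 2 * v_drop      # crude DC output estimate (two drops)
r_load = 500.0  # ohm, assumed implant load resistance
p_load = v_out ** 2 / r_load     # delivered power

print(f"Vout ~= {v_out:.2f} V, delivered power ~= {p_load*1e3:.1f} mW")
# With these assumptions: ~4.2 V and ~35 mW, consistent with >25 mW.
```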
Compiler algorithms for synchronization Translating program loops into a parallel form is one of the most important transformations performed by concurrentizing compilers. This transformation often requires the insertion of synchronization instructions within the body of the concurrent loop. Several loop synchronization techniques are presented first. Compiler algorithms to generate synchronization instructions for singly-nested loops are then discussed. Finally, a technique for the elimination of redundant synchronization instructions is presented.
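The classic mechanism such compilers insert is a post/wait pair around each loop-carried dependence, so that iteration i blocks only until the value it consumes has been produced. The event-per-iteration scheme below is an illustrative sketch of that idea, not the paper's specific algorithm.

```python
# Hedged sketch of post/wait synchronization for a concurrentized loop
# with the cross-iteration dependence a[i] = a[i-1] + 1.

import threading

N = 8
a = list(range(N + 1))
done = [threading.Event() for _ in range(N + 1)]
done[0].set()  # iteration 0's "result" (the initial a[0]) is ready

def iteration(i: int) -> None:
    done[i - 1].wait()   # WAIT: block until a[i-1] has been produced
    a[i] = a[i - 1] + 1  # the loop-carried dependence
    done[i].set()        # POST: signal that a[i] is now available

threads = [threading.Thread(target=iteration, args=(i,))
           for i in range(1, N + 1)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(a)  # [0, 1, 2, ..., 8]
```

A redundant-synchronization pass of the kind the abstract mentions would then delete post/wait pairs already implied transitively by other pairs.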
The Essence of P2P: A Reference Architecture for Overlay Networks The success of the P2P idea has created a huge diversity of approaches, among which overlay networks, for example, Gnutella, Kazaa, Chord, Pastry, Tapestry, P-Grid, or DKS, have received specific attention from both developers and researchers. A wide variety of algorithms, data structures, and architectures have been proposed. The terminologies and abstractions used, however, have become quite inconsistent since the P2P paradigm has attracted people from many different communities, e.g., networking, databases, distributed systems, graph theory, complexity theory, biology, etc. In this paper we propose a reference model for overlay networks which is capable of modeling different approaches in this domain in a generic manner. It is intended to allow researchers and users to assess the properties of concrete systems, to establish a common vocabulary for scientific discussion, to facilitate the qualitative comparison of the systems, and to serve as the basis for defining a standardized API to make overlay networks interoperable.
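Since the reference model is meant to ground a standardized API, a minimal interface might look like the sketch below. The method names here are hypothetical illustrations, not operations taken from the paper's model.

```python
# Hedged sketch of a minimal common overlay-network API.

from abc import ABC, abstractmethod

class Overlay(ABC):
    """Generic operations shared by structured and unstructured overlays."""

    @abstractmethod
    def join(self, bootstrap_peer: str) -> None:
        """Enter the overlay via a known peer."""

    @abstractmethod
    def leave(self) -> None:
        """Depart gracefully, handing over state if required."""

    @abstractmethod
    def route(self, key: bytes, message: bytes) -> None:
        """Forward a message toward the peer(s) responsible for key."""
```

A Chord- or Pastry-style system would implement `route` with its own key-based routing; a Gnutella-style system would fall back to flooding.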
Architectural Evolution of Integrated M-Phase High-Q Bandpass Filters M-phase bandpass filters (BPFs) are analyzed, and variations of the structure are proposed. For values of M that are integer multiples of 4, the conventional M-phase BPF structure is modified to take complex baseband impedances and frequency-translate their complex impedance response to the local oscillator frequency. Also, it is demonstrated how the M-phase BPF can be modified to implement a high quality factor (Q) image-rejection BPF with quadrature RF inputs. In addition, we present high-Q BPFs whose center frequencies are equal to the sum or difference of the RF and IF (intermediate frequency) clocks. Such filters can be useful in heterodyne receiver architectures.
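The frequency-translation property these filters build on is the standard N-path impedance-translation result, sketched below with assumed notation (this is the textbook form, not the paper's exact expression): a baseband impedance seen through M-phase switching reappears at the RF port centered on the LO.

```latex
% Hedged sketch of M-phase/N-path impedance translation near the LO:
%   R_sw  : switch on-resistance
%   gamma : scaling factor set by M and the clock duty cycle
%   Z_BB  : baseband impedance behind the switches
Z_{\mathrm{in}}(j\omega) \;\approx\; R_{\mathrm{sw}}
  + \gamma\, Z_{\mathrm{BB}}\!\bigl(j(\omega - \omega_{\mathrm{LO}})\bigr),
\qquad \omega \approx \omega_{\mathrm{LO}}
```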
16.7 A 20V 8.4W 20MHz four-phase GaN DC-DC converter with fully on-chip dual-SR bootstrapped GaN FET driver achieving 4ns constant propagation delay and 1ns switching rise time Recently, the demand for miniaturized and fast transient response power delivery systems has been growing in high-voltage industrial electronics applications. Gallium Nitride (GaN) FETs, showing a superior figure of merit (RDS,ON × QG) in comparison with silicon FETs [1], can enable both high-frequency and high-efficiency operation in these applications, thus making power converters smaller, faster and more efficient. However, the lack of GaN-compatible high-speed gate drivers is a major impediment to fully taking advantage of GaN FET-based power converters. Conventional high-voltage gate drivers usually exhibit a propagation delay, tdelay, of up to several tens of ns in the level shifter (LS), which becomes a critical problem as the switching frequency, fsw, reaches the 10MHz regime. Moreover, the switching slew rate (SR) of driving GaN FETs needs particular care in order to maintain efficient and reliable operation. Driving power GaN FETs with a fast SR results in large switching voltage spikes, risking breakdown of low-Vgs GaN devices, while a slow SR leads to a long switching rise time, tR, which degrades efficiency and limits fsw. In [2], large tdelay and long tR in the GaN FET driver limit its fsw to 1MHz. A design reported in [3] improves tR to 1.2ns, thereby enabling fsw up to 10MHz. However, the unregulated switching dead time, tDT, then becomes a major limitation to further reduction of tdelay. This results in limited fsw and a narrower range of VIN-VO conversion ratio. Interleaved multiphase topologies can be the most effective way to increase system fsw. However, each extra phase requires a capacitor for bootstrapped (BST) gate driving, which incurs additional cost and complexity in the PCB design. Moreover, the requirements of fsw synchronization and balanced current sharing for high-fsw operation in a multiphase implementation are challenging.
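The timing pressure described above falls directly out of the numbers in the title and abstract: at fsw = 20 MHz the whole switching period is 50 ns, so a legacy level-shifter delay of "several tens of ns" consumes most of it. The sketch below is pure arithmetic on those quoted figures.

```python
# Hedged timing arithmetic from the abstract's own numbers.

f_sw = 20e6             # switching frequency from the title
t_period = 1 / f_sw     # 50 ns switching period

for t_delay_ns in (30.0, 4.0):  # "several tens of ns" vs. this work's 4 ns
    frac = t_delay_ns * 1e-9 / t_period
    print(f"t_delay = {t_delay_ns:4.0f} ns -> {frac:.0%} of the 50 ns period")
# A 30 ns delay consumes 60% of the period; 4 ns only 8%, leaving margin
# for the 1 ns rise time and dead-time control.
```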
Multi-Channel Neural Recording Implants: A Review. The recently growing progress in neuroscience research and relevant achievements, as well as advancements in the fabrication process, have increased the demand for neural interfacing systems. Brain-machine interfaces (BMIs) have been revealed to be a promising method for the diagnosis and treatment of neurological disorders and the restoration of sensory and motor function. Neural recording implants, as a part of BMI, are capable of capturing brain signals, and amplifying, digitizing, and transferring them outside of the body with a transmitter. The main challenges of designing such implants are minimizing power consumption and silicon area. In this paper, multi-channel neural recording implants are surveyed. After presenting various neural-signal features, we investigate the main available neural recording circuit and system architectures. The fundamental blocks of available architectures, such as neural amplifiers, analog to digital converters (ADCs) and compression blocks, are explored. We cover the various topologies of neural amplifiers, provide a comparison, and probe their design challenges. To achieve a relatively high SNR at the output of the neural amplifier, noise reduction techniques are discussed. Also, to transfer neural signals outside of the body, they are digitized using data converters; in most cases, data compression is then applied to mitigate power consumption. We present the various dedicated ADC structures, as well as an overview of the main data compression methods.
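The amplify-digitize-compress chain the survey describes can be illustrated end to end with a toy signal. The gain, ADC resolution, and the trivial delta encoder below are illustrative assumptions, not parameters of any surveyed design.

```python
# Hedged end-to-end sketch of a neural recording chain:
# amplify -> quantize -> compress (simple delta encoding).

import numpy as np

rng = np.random.default_rng(0)
neural_uv = rng.normal(0, 50, 1000)   # synthetic neural signal, microvolts

gain = 1000                            # assumed neural-amplifier gain
vref, bits = 1.2, 10                   # assumed ADC full scale / resolution
amplified = neural_uv * 1e-6 * gain    # volts at the ADC input
codes = np.clip(np.round((amplified / vref + 0.5) * (2**bits - 1)),
                0, 2**bits - 1).astype(int)

deltas = np.diff(codes)                # consecutive-sample differences
print("code range:", codes.min(), codes.max(),
      "| max |delta|:", np.abs(deltas).max())
# Deltas are much smaller than raw codes, which is why delta-style
# compression reduces the bits that must be transmitted off-implant.
```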