Dataset schema (the viewer's column summary, reconstructed). One row consists of a Query Text, thirteen ranked candidate abstracts, and fourteen relevance scores; each row in the preview below lists the Query Text, then Ranking 1 through Ranking 13, then the fourteen scores on a single line.

Query Text: string, length 10 to 40.4k
Ranking 1: string, length 12 to 40.4k
Ranking 2: string, length 12 to 36.2k
Ranking 3: string, length 10 to 36.2k
Ranking 4: string, length 13 to 40.4k
Ranking 5: string, length 12 to 36.2k
Ranking 6: string, length 13 to 36.2k
Ranking 7: string, length 10 to 40.4k
Ranking 8: string, length 12 to 36.2k
Ranking 9: string, length 12 to 36.2k
Ranking 10: string, length 12 to 36.2k
Ranking 11: string, length 20 to 6.21k
Ranking 12: string, length 14 to 8.24k
Ranking 13: string, length 28 to 4.03k
score_0: float64, 1 to 1.25
score_1 to score_6: float64, 0 to 0.25
score_7: float64, 0 to 0.24
score_8: float64, 0 to 0.2
score_9: float64, 0 to 0.03
score_10 to score_13: float64, constant 0
A 50-kW High-Frequency and High-Efficiency SiC Voltage Source Inverter for More Electric Aircraft. High power density is required for power converters in more electric aircraft due to strict volume and weight demands, which makes silicon carbide (SiC) extremely attractive for this application. In this paper, a prototype 50-kW SiC two-level three-phase voltage source inverter is demonstrated with a gravimetric power density of 26 kW/kg (excluding the filter). A gate assisted circ...
Computer Modeling of Nickel-Iron Alloy in Power Electronics Applications. Rotational magnetizations of an Ni-Fe alloy are simulated using two different computer modeling approaches, physical and phenomenological. The first one is a model defined using a single hysteron operator based on the Stoner and Wohlfarth theory and the second one is a model based on a suitable system of neural networks. The models are identified and validated using experimental data, and, finally...
Robust Lightning Indirect Effect Protection in Avionic Diagnostics: Combining Inductive Blocking Devices With Metal Oxide Varistors. The combination of iron core inductors and metal oxide varistors has been modeled and experimentally characterized to propose a robust protection system for indirect lightning effects in an avionic environment. In particular, a numerical procedure to design transient limiters with high current rating, low clamping voltage, and low degradation rate is presented and described in detail. The protection ...
A Comprehensive Design Approach to an EMI Filter for a 6-kW Three-Phase Boost Power Factor Correction Rectifier in Avionics Vehicular Systems. A compact and efficient design for the electromagnetic interference (EMI) filter stage has become one of the most critical challenges in designing a high-density power converter, particularly for avionic applications. To meet regulatory standard requirements, the EMI filter design must be implemented precisely. However, the attenuation characteristics of common-mode (CM) and differential-m...
The part-time parliament Recent archaeological discoveries on the island of Paxos reveal that the parliament functioned despite the peripatetic propensity of its part-time legislators. The legislators maintained consistent copies of the parliamentary record, despite their frequent forays from the chamber and the forgetfulness of their messengers. The Paxon parliament's protocol provides a new way of implementing the state machine approach to the design of distributed systems.
A Bayesian Method for the Induction of Probabilistic Networks from Data This paper presents a Bayesian method for constructing probabilistic networks from databases. In particular, we focus on constructing Bayesian belief networks. Potential applications include computer-assisted hypothesis testing, automated scientific discovery, and automated construction of probabilistic expert systems. We extend the basic method to handle missing data and hidden (latent) variables. We show how to perform probabilistic inference by averaging over the inferences of multiple belief networks. Results are presented of a preliminary evaluation of an algorithm for constructing a belief network from a database of cases. Finally, we relate the methods in this paper to previous work, and we discuss open problems.
Wireless sensor networks: a survey This paper describes the concept of sensor networks which has been made viable by the convergence of micro-electro-mechanical systems technology, wireless communications and digital electronics. First, the sensing tasks and the potential sensor networks applications are explored, and a review of factors influencing the design of sensor networks is provided. Then, the communication architecture for sensor networks is outlined, and the algorithms and protocols developed for each layer in the literature are explored. Open research issues for the realization of sensor networks are also discussed.
Fully integrated wideband high-current rectifiers for inductively powered devices This paper describes the design and implementation of fully integrated rectifiers in BiCMOS and standard CMOS technologies for rectifying an externally generated RF carrier signal in inductively powered wireless devices, such as biomedical implants, radio-frequency identification (RFID) tags, and smartcards to generate an on-chip dc supply. Various full-wave rectifier topologies and low-power circuit design techniques are employed to decrease substrate leakage current and parasitic components, reduce the possibility of latch-up, and improve power transmission efficiency and high-frequency performance of the rectifier block. These circuits are used in wireless neural stimulating microsystems, fabricated in two processes: the University of Michigan's 3-μm 1M/2P N-epi BiCMOS, and the AMI 1.5-μm 2M/2P N-well standard CMOS. The rectifier areas are 0.12-0.48 mm2 in the above processes and they are capable of delivering >25mW from a receiver coil to the implant circuitry. The performance of these integrated rectifiers has been tested and compared, using carrier signals in 0.1-10-MHz range.
Standards for XML and Web Services Security XML schemas convey the data syntax and semantics for various application domains, such as business-to-business transactions, medical records, and production status reports. However, these schemas seldom address security issues, which can lead to a worst-case scenario of systems and protocols with no security at all. At best, they confine security to transport level mechanisms such as secure sockets layer (SSL). On the other hand, the omission of security provisions from domain schemas opens the way for generic security specifications based on XML document and grammar extensions. These specifications are orthogonal to domain schemas but integrate with them to support a variety of security objectives, such as confidentiality, integrity, and access control. In 2002, several specifications progressed toward providing a comprehensive standards framework for secure XML-based applications. The paper shows some of the most important specifications, the issues they address, and their dependencies.
Random walks in peer-to-peer networks: algorithms and evaluation We quantify the effectiveness of random walks for searching and construction of unstructured peer-to-peer (P2P) networks. We have identified two cases where the use of random walks for searching achieves better results than flooding: (a) when the overlay topology is clustered, and (b) when a client re-issues the same query while its horizon does not change much. Related to the simulation of random walks is also the distributed computation of aggregates, such as averaging. For construction, we argue that an expander can be maintained dynamically with constant operations per addition. The key technical ingredient of our approach is a deep result of stochastic processes indicating that samples taken from consecutive steps of a random walk on an expander graph can achieve statistical properties similar to independent sampling. This property has been previously used in complexity theory for construction of pseudorandom number generators. We reveal another facet of this theory and translate savings in random bits to savings in processing overhead.
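As an aside, the search primitive this record evaluates is easy to sketch in a few lines of Python; the neighbors/keys dictionaries, the ttl parameter, and the ring demo are illustrative assumptions, not the paper's protocol.

```python
import random

def random_walk_search(neighbors, keys, start, target, ttl=64, seed=None):
    """Forward one walker along uniformly random overlay links until the
    target key is found locally or the ttl expires.
    Returns (node_holding_target_or_None, hops_used)."""
    rng = random.Random(seed)
    node = start
    for hop in range(ttl + 1):
        if target in keys[node]:            # query answered at this peer
            return node, hop
        if not neighbors[node]:             # isolated peer: the walk dies
            return None, hop
        node = rng.choice(neighbors[node])  # one unbiased random step
    return None, ttl

# Tiny demo overlay: a 6-node ring, with the key stored at node 3.
ring = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
keys = {i: set() for i in range(6)}
keys[3].add("item")
print(random_walk_search(ring, keys, start=0, target="item", seed=1))
```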
CoCo: coding-based covert timing channels for network flows In this paper, we propose CoCo, a novel framework for establishing covert timing channels. The CoCo covert channel modulates the covert message in the inter-packet delays of the network flows, while a coding algorithm is used to ensure the robustness of the covert message to different perturbations. The CoCo covert channel is adjustable: by adjusting certain parameters one can trade off different features of the covert channel, i.e., robustness, rate, and undetectability. By simulating the CoCo covert channel using different coding algorithms we show that CoCo improves the covert robustness as compared to the previous research, while being practically undetectable.
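The basic delay modulation behind such channels can be sketched as below; CoCo's actual contribution, the robust coding layer, is omitted here, and the base/delta symbol values are arbitrary placeholders.

```python
def encode_delays(bits, base=0.10, delta=0.05):
    """Modulate covert bits onto inter-packet gaps (seconds):
    bit 0 -> base, bit 1 -> base + delta."""
    return [base + delta * b for b in bits]

def decode_delays(gaps, base=0.10, delta=0.05):
    """Demodulate by thresholding halfway between the two symbols."""
    threshold = base + delta / 2
    return [1 if g > threshold else 0 for g in gaps]

bits = [1, 0, 1, 1, 0]
observed = [g + 0.012 for g in encode_delays(bits)]  # mild constant jitter
assert decode_delays(observed) == bits
```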
CORDIC-based computation of ArcCos and ArcSin CORDIC-based algorithms to compute cos^-1(t), sin^-1(t), and sqrt(1 - t^2) are proposed. The implementation requires a standard CORDIC module plus a module to compute the direction of rotation, this being the same hardware required for the extended CORDIC vectoring recently proposed by the authors. Although these functions can be obtained as a special case of this extended vectoring, the specific algorithm we propose here presents two significant improvements: (1) it achieves an angle granularity of 2^-n using the same datapath width as the standard CORDIC algorithm (about n bits, instead of the about 2n that would be required using the extended vectoring), and (2) no repetitions of iterations are needed. The proposed algorithm is compatible with the extended vectoring and, in contrast with previous implementations, the number of iterations and the delay of each iteration are the same as for the conventional CORDIC algorithm.
Optimum insertion/deletion point selection for fractional sample rate conversion In this paper, an optimum insertion/deletion point selection algorithm for fractional sample rate conversion (SRC) is proposed. The direct insertion/deletion technique achieves low complexity and low power consumption compared to the other fractional SRC methods. Using a multiple-set insertion/deletion technique is efficient for reducing the distortion caused by the insertion/deletion step. When the conversion factor is (N ± α)/N, the number of possible patterns of insertion/deletion points and the number of combinations for multiple-set inserters/deleters grow as α increases. The proposed algorithm minimizes the distortion due to SRC by selecting the patterns and the combinations.
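A toy sketch of the direct-insertion half of this technique (deletion is symmetric: skip instead of duplicate); the block size N, the a insertions per block, and the points argument are illustrative stand-ins for the paper's optimized selection.

```python
def direct_insertion(x, N, a, points=None):
    """Convert the rate by (N + a)/N: within every block of N input
    samples, duplicate a of them at the chosen insertion points.
    The placement of 'points' is exactly what an optimum selection
    algorithm would choose to minimize distortion."""
    points = sorted(points if points is not None else range(a))
    out = []
    for blk in range(0, len(x), N):
        for i, s in enumerate(x[blk:blk + N]):
            out.append(s)
            if i in points:   # duplicate => one extra output sample
                out.append(s)
    return out

x = list(range(10))
y = direct_insertion(x, N=5, a=1, points=[2])
print(len(x), len(y))  # 10 -> 12, i.e., rate scaled by (5 + 1)/5
```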
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized, while digital CMOS-based PIM has faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54x (CPU), 1.56x (GPU), 4.13x (memristor-based PIM), and 3.05x (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58x (CPU), 19.93x (GPU), 14.02x (memristor-based PIM), and 10.48x (CMOS-based PIM), on average.
Scores for the row above (score_0 to score_13): 1.2, 0.2, 0.2, 0.1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Fixed-Time Synchronization of Complex Networks With Impulsive Effects via Nonchattering Control. Dealing with impulsive effects is one of the most challenging problems in the field of fixed-time control. In this paper, we solve this challenging problem by considering fixed-time synchronization of complex networks (CNs) with impulsive effects. By designing a new Lyapunov function and constructing comparison systems, a sufficient condition formulated by matrix inequalities is given to ensure that all the dynamical subsystems in the CNs are synchronized with an isolated system in a settling time, which is independent of the initial values of both the CNs and the isolated system. Then, by partitioning impulse interval and using the convex combination technique, sufficient conditions in terms of linear matrix inequalities are provided. Our synchronization criteria unify synchronizing and desynchronizing impulses. Compared with the existing controllers for fixed-time and finite-time techniques, the designed controller is continuous and does not include any sign function, and hence, the chattering phenomenon in most of the existing results is overcome. An optimal algorithm is proposed for the estimation of the settling time. Numerical examples are given to show the effectiveness of our new results.
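The settling-time claims in this and the following synchronization records rest on fixed-time stability lemmas of the following standard (Polyakov-type) form; this is generic background, not the specific impulse-aware condition derived in the paper above.

```latex
% Generic fixed-time stability bound. If a Lyapunov function V satisfies
%   \dot{V} \le -\alpha V^{p} - \beta V^{q}, \quad 0 < p < 1 < q, \quad \alpha, \beta > 0,
% then V(t) reaches zero within a settling time bounded independently of
% the initial state:
\[
  T \;\le\; T_{\max} \;=\; \frac{1}{\alpha\,(1 - p)} \;+\; \frac{1}{\beta\,(q - 1)} .
\]
```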
Finite-Time and Fixed-Time Cluster Synchronization With or Without Pinning Control. In this paper, the finite-time and fixed-time cluster synchronization problem for complex networks with or without pinning control are discussed. Finite-time (or fixed-time) synchronization has been a hot topic in recent years, which means that the network can achieve synchronization in finite-time, and the settling time depends on the initial values for finite-time synchronization (or the settlin...
Finite/Fixed-Time Pinning Synchronization of Complex Networks With Stochastic Disturbances. This brief proposes a unified theoretical framework to investigate the finite/fixed-time synchronization of complex networks with stochastic disturbances. By designing a common pinning controller with different ranges of power parameters, both the goals of finite-time and fixed-time synchronization in probability for the network topology containing spanning trees can be achieved. Moreover, with the help of finite-time stochastic stability theory, two types of explicit expressions of finite/fixed (dependent/independent on the initial values) settling times are calculated as well. One numerical example is finally presented to demonstrate the effectiveness of the theoretical analysis.
Neural sliding-mode pinning control for output synchronization for uncertain general complex networks A novel approach for output synchronization of uncertain general complex networks with non-identical nodes is proposed. The goal is achieved by applying a neural controller to a small fraction of the network nodes (the pinned ones); this controller is composed of an online identifier based on a recurrent high-order neural network and a sliding-mode controller. An illustrative example is included, composed of a network of ten nodes with different self-dynamics, illustrating the effectiveness and good performance of the proposed control scheme.
Finite-Time Cluster Synchronization of Lur'e Networks: A Nonsmooth Approach. This paper is devoted to the finite-time cluster synchronization issue of nonlinearly coupled complex networks which consist of discontinuous Lur'e systems. On the basis of the definition of the Filippov regularization process and the measurable selection theorem, the discontinuous nonlinear function is mapped into a function-valued set, then a measurable function is accordingly selected from the Fi...
Social manufacturing: A survey of the state-of-the-art and future challenges Under the growing trend of personalization and socialization, social manufacturing is an emerging technical and business practice in the mass individualization paradigm that allows prosumers to build personalized products and individualized services with their partners through integrating inter-organizational manufacturing service processes. This paper makes a comprehensive literature review and a further discussion on social manufacturing via a constructive methodology. After clarifying the definition of social manufacturing, we analyze current research progress including business models, implementation architectures and frameworks, case studies, and the key enabling techniques (e.g., big data mining and cyber-physical-social systems) for realizing the idea of social manufacturing. The potential impact and future challenges are pointed out as well. It is expected that this review can help readers gain more understanding of the idea of social manufacturing.
Finite-Time Synchronization of Impulsive Dynamical Networks With Strong Nonlinearity Finite-time synchronization (FTS) of dynamical networks has received much attention in recent years, as it has a fast convergence rate and good robustness. Most existing results rely heavily on some global condition such as the Lipschitz condition, which has limitations in describing the strong nonlinearity of most real systems. Dealing with strong nonlinearity in the field of FTS is still a challenging problem. In this article, the FTS problem of impulsive dynamical networks with general nonlinearity (especially strong nonlinearity) is considered. By virtue of the concept of nonlinearity strength, which quantifies the network nonlinearity, local FTS criteria are established, where the range of admissible initial values and the settling time are solved. For networks with weak nonlinearity, global FTS criteria that unify synchronizing, inactive, and desynchronizing impulses are derived. Differing from most existing studies on FTS, the node system here does not have to satisfy the global Lipschitz condition, therefore covering more practical situations. Finally, numerical examples are provided to demonstrate our theoretical results.
Asymptotic and Finite-Time Cluster Synchronization of Coupled Fractional-Order Neural Networks With Time Delay This article is devoted to the cluster synchronization issue of coupled fractional-order neural networks. By introducing the stability theory of fractional-order differential systems and the framework of Filippov regularization, some sufficient conditions are derived for ascertaining the asymptotic and finite-time cluster synchronization of coupled fractional-order neural networks, respectively. In addition, the upper bound of the settling time for finite-time cluster synchronization is estimated. Compared with the existing works, the results herein are applicable for fractional-order systems, which could be regarded as an extension of integer-order ones. A numerical example with different cases is presented to illustrate the validity of theoretical results.
Finite-time synchronization of nonidentical BAM discontinuous fuzzy neural networks with delays and impulsive effects via non-chattering quantized control Highlights: (1) Two new inequalities are developed to deal with the mismatched coefficients of the fuzzy part. (2) A simple but robust quantized state feedback controller is designed to overcome the effects of discontinuous activations, time delay, and nonidentical coefficients simultaneously. The designed control schemes do not utilize the sign function and can save channel resources. Moreover, novel non-chattering quantized adaptive controllers are also considered to reduce the control cost. (3) By utilizing the 1-norm analytical technique and the comparison system method, the effect of impulses on the FTS is well coped with. (4) Without utilizing the finite-time stability theorem in [16], several FTS criteria are obtained. Moreover, the settling time is explicitly estimated. Results of this paper can easily be extended to FTS of other classical delayed impulsive NNs with or without nonidentical coefficients.
Gradient-Based Learning Applied to Document Recognition Multilayer neural networks trained with the back-propagation algorithm constitute the best example of a successful gradient based learning technique. Given an appropriate network architecture, gradient-based learning algorithms can be used to synthesize a complex decision surface that can classify high-dimensional patterns, such as handwritten characters, with minimal preprocessing. This paper rev...
A 1.8-GHz LC VCO with 1.3-GHz tuning range and digital amplitude calibration A 1.8-GHz LC VCO designed in a 0.18-/spl mu/m CMOS process achieves a very wide tuning range of 73% and measured phase noise of -123.5 dBc/Hz at a 600-kHz offset from a 1.8-GHz carrier while drawing 3.2 mA from a 1.5-V supply. The impacts of wideband operation on start-up constraints and phase noise are discussed. Tuning range is analyzed in terms of fundamental dimensionless design parameters yie...
Problem space search algorithms for resource-constrained project scheduling The Resource-Constrained Project Scheduling (RCPS) problem is a well-known and challenging combinatorial optimization problem. It is a generalization of the Job Shop Scheduling problem and thus is NP-hard in the strong sense. Problem Space Search is a local search "metaheuristic" which has been shown to be effective for a variety of combinatorial optimization problems including Job Shop Scheduling. In this paper, we propose two problem space search heuristics for the RCPS problem. These heuristics are tested through intensive computational experiments on a 480-instance RCPS data set recently generated by Kolisch et al. [12]. Using this data set we compare our heuristics with a branch-and-bound algorithm developed by Demeulemeester and Herroelen [9]. The results produced by the heuristics are extremely encouraging, showing comparable performance to the branch-and-bound algorithm.
A Recursive Switched-Capacitor DC-DC Converter Achieving 2^N - 1 Ratios With High Efficiency Over a Wide Output Voltage Range. A Recursive Switched-Capacitor (RSC) topology is introduced that enables reconfiguration among 2^N - 1 conversion ratios while achieving minimal capacitive charge-sharing loss for a given silicon area. All 2^N - 1 ratios are realized by strategically interconnecting N 2:1 SC cells either in series, in parallel, or in a stacked configuration such that the number of input and ground connections are maxi...
A 12-Bit Dynamic Tracking Algorithm-Based SAR ADC With Real-Time QRS Detection A 12-bit successive approximation register (SAR) ADC based on a dynamic tracking algorithm and a real-time QRS-detection algorithm are proposed. The dynamic tracking algorithm features two tracking windows adjacent to the prediction interval. The algorithm tracks the input signal's variation range and automatically adjusts the subrange interval and updates the prediction code. The QRS-complex detection algorithm integrates a synchronous time-sequential ADC and a real-time QRS detector. The chip is fabricated in a standard 0.13 μm CMOS process with a 0.6 V supply. Measurement results show that the proposed ADC exhibits a 10.72 effective number of bits (ENOB) and 79.63 dB spur-free dynamic range (SFDR) at a 10 kHz sample rate given a 41.5 Hz sinusoid input. The DNL and INL are bounded at -0.6/0.62 LSB and -0.67/1.43 LSB. The ADC achieves an FoM of 48 fJ/conversion-step in the best case. The prototype was also tested with an ECG signal input and extracts the heartbeat signal successfully.
Scores for the row above (score_0 to score_13): 1.038189, 0.037143, 0.031786, 0.030476, 0.028571, 0.028571, 0.028571, 0.022857, 0.003571, 0, 0, 0, 0, 0
SpArch: Efficient Architecture for Sparse Matrix Multiplication Generalized Sparse Matrix-Matrix Multiplication (SpGEMM) is a ubiquitous task in various engineering and scientific applications. However, inner product based SpGEMM introduces redundant input fetches for mismatched nonzero operands, while outer product based approach suffers from poor output locality due to numerous partial product matrices. Inefficiency in the reuse of either inputs or outputs data leads to extensive and expensive DRAM access. To address this problem, this paper proposes an efficient sparse matrix multiplication accelerator architecture, SpArch, which jointly optimizes the data locality for both input and output matrices. We first design a highly parallelized streaming-based merger to pipeline the multiply and merge stage of partial matrices so that partial matrices are merged on chip immediately after produced. We then propose a condensed matrix representation that reduces the number of partial matrices by three orders of magnitude and thus reduces DRAM access by 5.4x. We further develop a Huffman tree scheduler to improve the scalability of the merger for larger sparse matrices, which reduces the DRAM access by another 1.8x. We also resolve the increased input matrix read induced by the new representation using a row prefetcher with near-optimal buffer replacement policy, further reducing the DRAM access by 1.5x. Evaluated on 20 benchmarks, SpArch reduces the total DRAM access by 2.8x over previous state-of-the-art. On average, SpArch achieves 4x, 19x, 18x, 17x, 1285x speedup and 6x, 164x, 435x, 307x, 62x energy savings over OuterSpace, MKL, cuSPARSE, CUSP, and ARM Armadillo, respectively.
Procrustes: a Dataflow and Accelerator for Sparse Deep Neural Network Training The success of DNN pruning has led to the development of energy-efficient inference accelerators that support pruned models with sparse weight and activation tensors. Because the memory layouts and dataflows in these architectures are optimized for the access patterns during inference, however, they do not efficiently support the emerging sparse training techniques. In this paper, we demonstrate (a) that accelerating sparse training requires a co-design approach where algorithms are adapted to suit the constraints of hardware, and (b) that hardware for sparse DNN training must tackle constraints that do not arise in inference accelerators. As proof of concept, we adapt a sparse training algorithm to be amenable to hardware acceleration; we then develop dataflow, data layout, and load-balancing techniques to accelerate it. The resulting system is a sparse DNN training accelerator that produces pruned models with the same accuracy as dense models without first training, then pruning, and finally retraining, a dense model. Compared to training the equivalent unpruned models using a state-of-the-art DNN accelerator without sparse training support, Procrustes consumes up to 3.26× less energy and offers up to 4× speedup across a range of models, while pruning weights by an order of magnitude and maintaining unpruned accuracy.
MatRaptor: A Sparse-Sparse Matrix Multiplication Accelerator Based on Row-Wise Product Sparse-sparse matrix multiplication (SpGEMM) is a computation kernel widely used in numerous application domains such as data analytics, graph processing, and scientific computing. In this work we propose MatRaptor, a novel SpGEMM accelerator that is high performance and highly resource efficient. Unlike conventional methods using inner or outer product as the meta operation for matrix multiplication, our approach is based on row-wise product, which offers a better tradeoff in terms of data reuse and on-chip memory requirements, and achieves higher performance for large sparse matrices. We further propose a new hardware-friendly sparse storage format, which allows parallel compute engines to access the sparse data in a vectorized and streaming fashion, leading to high utilization of memory bandwidth. We prototype and simulate our accelerator architecture using gem5 on a diverse set of matrices. Our experiments show that MatRaptor achieves 129.2× speedup over single-threaded CPU, 8.8× speedup over GPU and 1.8× speedup over the state-of-the-art SpGEMM accelerator (OuterSPACE). MatRaptor also has 7.2× lower power consumption and 31.3× smaller area compared to OuterSPACE.
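To make the row-wise (Gustavson) product described above concrete, here is a minimal Python sketch; the dict-of-dicts storage and function name are illustrative conveniences, not MatRaptor's hardware-friendly format.

```python
def spgemm_rowwise(A, B):
    """Row-wise (Gustavson) sparse-sparse matrix multiply.
    A, B: dict-of-dicts, A[i][k] = value; returns C in the same format.
    Each output row C[i, :] is built in one pass over row i of A,
    which is the data-reuse pattern the accelerator pipelines."""
    C = {}
    for i, row in A.items():
        acc = {}                       # sparse accumulator for C[i, :]
        for k, a_ik in row.items():
            for j, b_kj in B.get(k, {}).items():
                acc[j] = acc.get(j, 0) + a_ik * b_kj
        if acc:
            C[i] = acc
    return C

A = {0: {0: 1.0, 2: 2.0}, 1: {1: 3.0}}
B = {0: {1: 4.0}, 1: {0: 5.0}, 2: {1: 6.0}}
print(spgemm_rowwise(A, B))  # {0: {1: 16.0}, 1: {0: 15.0}}
```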
On the evolution of user interaction in Facebook Online social networks have become extremely popular; numerous sites allow users to interact and share content using social links. Users of these networks often establish hundreds to even thousands of social links with other users. Recently, researchers have suggested examining the activity network - a network that is based on the actual interaction between users, rather than mere friendship - to distinguish between strong and weak links. While initial studies have led to insights on how an activity network is structurally different from the social network itself, a natural and important aspect of the activity network has been disregarded: the fact that over time social links can grow stronger or weaker. In this paper, we study the evolution of activity between users in the Facebook social network to capture this notion. We find that links in the activity network tend to come and go rapidly over time, and the strength of ties exhibits a general decreasing trend of activity as the social network link ages. For example, only 30% of Facebook user pairs interact consistently from one month to the next. Interestingly, we also find that even though the links of the activity network change rapidly over time, many graph-theoretic properties of the activity network remain unchanged.
Tensor-matrix products with a compressed sparse tensor The Canonical Polyadic Decomposition (CPD) of tensors is a powerful tool for analyzing multi-way data and is used extensively to analyze very large and extremely sparse datasets. The bottleneck of computing the CPD is multiplying a sparse tensor by several dense matrices. Algorithms for tensor-matrix products fall into two classes. The first class saves floating point operations by storing a compressed tensor for each dimension of the data. These methods are fast but suffer high memory costs. The second class uses a single uncompressed tensor at the cost of additional floating point operations. In this work, we bridge the gap between the two approaches and introduce the compressed sparse fiber (CSF) format, a data structure for sparse tensors, along with a novel parallel algorithm for tensor-matrix multiplication. CSF offers similar operation reductions as existing compressed methods while using only a single tensor structure. We validate our contributions with experiments comparing against state-of-the-art methods on a diverse set of datasets. Our work uses 58% less memory than the state-of-the-art while achieving 81% of the parallel performance on 16 threads.
Gorgon: Accelerating Machine Learning from Relational Data Accelerator deployment in data centers remains limited despite domain-specific architectures' promise of higher performance. Rapidly changing applications and high NRE costs make deploying fixed-function accelerators at scale untenable. More flexible than DSAs, FPGAs are gaining traction but remain hampered by cumbersome programming models, long synthesis times, and slow clocks. Coarse-grained reconfigurable architectures (CGRAs) are a compelling alternative and offer efficiency while retaining programmability: by providing general-purpose hardware and communication patterns, a single CGRA targets multiple application domains. One emerging application is in-database machine learning: a high-performance, low-friction interface for analytics on large databases. We co-locate database and machine learning processing in a unified reconfigurable data analytics accelerator, Gorgon, which flexibly shares resources between DB and ML without compromising performance or incurring excessive overheads in either domain. We distill and integrate database parallel patterns into an existing ML-focused CGRA, increasing area by less than 4% while outperforming a multicore software baseline by 1500x. We also explore the performance impact of unifying DB and ML in a single accelerator, showing up to 4x speedup over split accelerators.
Evaluating Fast Algorithms for Convolutional Neural Networks on FPGAs. In recent years, convolutional neural networks (CNNs) have become widely adopted for computer vision tasks. Field-programmable gate arrays (FPGAs) have been adequately explored as a promising hardware accelerator for CNNs due to their high performance, energy efficiency, and reconfigurability. However, prior FPGA solutions based on the conventional convolutional algorithm are often bounded by the com...
SpAtten: Efficient Sparse Attention Architecture with Cascade Token and Head Pruning The attention mechanism is becoming increasingly popular in Natural Language Processing (NLP) applications, showing superior performance than convolutional and recurrent architectures. However, general-purpose platforms such as CPUs and GPUs are inefficient when performing attention inference due to complicated data movement and low arithmetic intensity. Moreover, existing NN accelerators mainly f...
Memristor-Based Material Implication (IMPLY) Logic: Design Principles and Methodologies Memristors are novel devices, useful as memory at all hierarchies. These devices can also behave as logic circuits. In this paper, the IMPLY logic gate, a memristor-based logic circuit, is described. In this memristive logic family, each memristor is used as an input, output, computational logic element, and latch in different stages of the computing process. The logical state is determined by the resistance of the memristor. This logic family can be integrated within a memristor-based crossbar, commonly used for memory. In this paper, a methodology for designing this logic family is proposed. The design methodology is based on a general design flow, suitable for all deterministic memristive logic families, and includes some additional design constraints to support the IMPLY logic family. An IMPLY 8-bit full adder based on this design methodology is presented as a case study.
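The logical completeness of the IMPLY gate is easy to verify in software; the sketch below models only the Boolean behavior (not the resistive device physics), and the function names are illustrative.

```python
def imply(p, q):
    """Material implication on {0, 1}: p -> q  ==  (not p) or q.
    In the memristive gate, q's memristor is overwritten in place."""
    return 1 if (p == 0 or q == 1) else 0

def nand(p, q):
    """NAND from two IMPLY steps plus one memristor preset to 0,
    the standard complete basis for IMPLY logic:
    NAND(p, q) = p -> (q -> 0)."""
    return imply(p, imply(q, 0))

for p in (0, 1):
    for q in (0, 1):
        assert nand(p, q) == 1 - (p & q)
print("IMPLY-based NAND matches the Boolean truth table")
```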
Encapsulation of parallelism in the Volcano query processing system Volcano is a new dataflow query processing system we have developed for database systems research and education. The uniform interface between operators makes Volcano extensible by new operators. All operators are designed and coded as if they were meant for a single-process system only. When attempting to parallelize Volcano, we had to choose between two models of parallelization, called here the bracket and operator models. We describe the reasons for not choosing the bracket model, introduce the novel operator model, and provide details of Volcano's exchange operator that parallelizes all other operators. It allows intra-operator parallelism on partitioned datasets and both vertical and horizontal inter-operator parallelism. The exchange operator encapsulates all parallelism issues and therefore makes implementation of parallel database algorithms significantly easier and more robust. Included in this encapsulation is the translation between demand-driven dataflow within processes and data-driven dataflow between processes. Since the interface between Volcano operators is similar to the one used in “real,” commercial systems, the techniques described here can be used to parallelize other query processing engines.
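Volcano's uniform demand-driven operator interface can be mimicked with Python generators; scan/select/project below are illustrative operators only, and the exchange operator (the paper's contribution) would slot between such operators to move rows across process boundaries.

```python
def scan(table):
    """Leaf operator: produce rows on demand."""
    for row in table:
        yield row

def select(child, pred):
    """Filter operator with the same pull-based interface."""
    for row in child:
        if pred(row):
            yield row

def project(child, cols):
    """Column projection; composes because every operator is an iterator."""
    for row in child:
        yield {c: row[c] for c in cols}

emp = [{"name": "ada", "dept": 1}, {"name": "bob", "dept": 2}]
plan = project(select(scan(emp), lambda r: r["dept"] == 1), ["name"])
print(list(plan))  # [{'name': 'ada'}]
```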
The path to the software-defined radio receiver After being the subject of speculation for many years, a software-defined radio receiver concept has emerged that is suitable for mobile handsets. A key step forward is the realization that in mobile handsets, it is enough to receive one channel with any bandwidth, situated in any band. Thus, the front-end can be tuned electronically. Taking a cue from a digital front-end, the receiver's flexible ...
A monolithic buck DC-DC converter with on-chip PWM circuit A monolithic CMOS voltage-mode buck DC-DC converter with integrated power switches and a new on-chip pulse-width modulation (PWM) switching-control technique is presented in this paper. The PWM scheme is built around a CMOS ring oscillator whose duty cycle is compensated by a pseudo-hyperbola-curve current generator to achieve almost constant-frequency operation. The minimum operating voltage of this voltage-mode buck DC-DC converter is 1.2 V. The proposed buck DC-DC converter, with a chip area of 0.82 mm^2, is fabricated in a standard 0.35-μm CMOS process. The experimental results show that the converter is well regulated over an output range from 0.3 to 1.2 V, with an input voltage of 1.5 V. The maximum efficiency of the converter is 88%, and its efficiency stays above 80% over an output power range from 30 to 300 mW.
Cross-Tenant Side-Channel Attacks in PaaS Clouds We present a new attack framework for conducting cache-based side-channel attacks and demonstrate this framework in attacks between tenants on commercial Platform-as-a-Service (PaaS) clouds. Our framework uses the FLUSH-RELOAD attack of Gullasch et al. as a primitive, and extends this work by leveraging it within an automaton-driven strategy for tracing a victim's execution. We leverage our framework first to confirm co-location of tenants and then to extract secrets across tenant boundaries. We specifically demonstrate attacks to collect potentially sensitive application data (e.g., the number of items in a shopping cart), to hijack user accounts, and to break SAML single sign-on. To the best of our knowledge, our attacks are the first granular, cross-tenant, side-channel attacks successfully demonstrated on state-of-the-art commercial clouds, PaaS or otherwise.
A VCO-Based Nonuniform Sampling ADC Using a Slope-Dependent Pulse Generator This paper presents a voltage-controlled oscillator (VCO)-based nonuniform sampling analog-to-digital converter (ADC) as an alternative to the level-crossing (LC)-based converters for digitizing biopotential signals. This work aims to provide a good signal-to-noise-and-distortion ratio at a low average sampling rate. In the proposed conversion method, a slope-dependent pulse generation block is used to provide a variable sample rate adjusted according to the input signal's slope. Simulation results show that the introduced method meets a target reconstruction quality with a sampling rate approaching 92 Sps, while on the same MIT-BIH Arrhythmia N 106 ECG benchmark, the classic LC-based approach requires a sampling rate higher than 500 Sps. The benefits of the proposed method are more remarkable when the input signal is very noisy. The proposed ADC achieves a compression ratio close to 4, but with only 5.4% root-mean-square difference when tested using the MIT-BIH Arrhythmia Database.
Scores for the row above (score_0 to score_13): 1.04032, 0.0416, 0.0408, 0.04, 0.04, 0.0272, 0.02, 0.008, 0.000027, 0, 0, 0, 0, 0
A Reconfigurable Mostly-Digital Delta-Sigma ADC With a Worst-Case FOM of 160 dB This paper presents a second-generation mostly-digital background-calibrated oversampling ADC based on voltage-controlled ring oscillators (VCROs). Its performance is in line with the best modulator ADCs published to date, but it occupies much less circuit area, is reconfigurable, and consists mainly of digital circuitry. Enhancements relative to the first-generation version include digitally background-calibrated open-loop conversion in the VCRO to increase ADC bandwidth and enable operation from a single low-voltage power supply, quadrature-coupled ring oscillators to reduce quantization noise, digital over-range correction to improve dynamic range and enable graceful overload behavior, and various circuit-level improvements. The ADC occupies 0.075 mm^2 in a 65 nm CMOS process and operates from a single 0.9-1.2 V supply. Its sample rate is tunable from 1.3 to 2.4 GHz, over which the SNDR spans 70-75 dB, the bandwidth spans 5-37.5 MHz, and the minimum SNDR + 10 log(bandwidth/power dissipation) figure of merit (FOM) is 160 dB.
Signal Folding in A/D Converters Signal folding appears in A/D converters (ADCs) in various ways. In this paper, the evolution of this technique is derived from the fundamentals of quantization to obtain systematic insights. We look upon folding as an automatic multiplexing of zero crossings, which simplifies hardware while preserving the high speed and low latency of a flash ADC. By appreciating similarities between the well-kno...
A 45 nm Resilient Microprocessor Core for Dynamic Variation Tolerance A 45 nm microprocessor core integrates resilient error-detection and recovery circuits to mitigate the clock frequency (FCLK) guardbands for dynamic parameter variations, improving throughput and energy efficiency. The core supports two distinct error-detection designs, allowing a direct comparison of the relative trade-offs. The first design embeds error-detection sequential (EDS) circuits in critical paths to detect late timing transitions. In addition to reducing the FCLK guardbands for dynamic variations, the embedded EDS design can exploit path-activation rates to operate the microprocessor faster than infrequently activated critical paths would otherwise allow. The second error-detection design offers a less intrusive approach to dynamic timing-error detection by placing a tunable replica circuit (TRC) per pipeline stage to monitor worst-case delays. Although the TRCs require a delay guardband to ensure the TRC delay is always slower than critical-path delays, the TRC design captures most of the benefits of the embedded EDS design with less implementation overhead. Furthermore, while core min-delay constraints limit the potential benefits of the embedded EDS design, a salient advantage of the TRC design is the ability to detect a wider range of dynamic delay variation, as demonstrated through low supply voltage (VCC) measurements. Both error-detection designs interface with error-recovery techniques, enabling the detection and correction of timing errors from fast-changing variations such as high-frequency VCC droops. The microprocessor core also supports two separate error-recovery techniques to guarantee correct execution even if dynamic variations persist. The first technique requires clock control to replay errant instructions at FCLK/2. In comparison, the second technique is a new multiple-issue instruction replay design that corrects errant instructions with a lower performance penalty and without requiring clock control. Silicon measurements demonstrate that resilient circuits enable a 41% throughput gain at equal energy, or a 22% energy reduction at equal throughput, as compared to a conventional design when executing a benchmark program with a 10% VCC droop. In addition, the microprocessor includes a new adaptive clock control circuit that interfaces with the resilient circuits and a phase-locked loop (PLL) to track recovery cycles and adapt to persistent errors by dynamically changing FCLK for maximum efficiency.
A Mostly Digital VCO-Based CT-SDM With Third-Order Noise Shaping. This paper presents the architectural concept and implementation of a mostly digital voltage-controlled oscillator-analog-to-digital converter (VCO-ADC) with third-order quantization noise shaping. The system is based on the combination of a VCO and a digital counter. It is shown how this combination can function as a continuous-time integrator to form a high-order continuous-time sigma-delta modu...
A 0.5-V 1.6-mW 2.4-GHz Fractional-N All-Digital PLL for Bluetooth LE With PVT-Insensitive TDC Using Switched-Capacitor Doubler in 28-nm CMOS. This paper proposes an ultra-low-voltage (ULV) fractional-N all-digital PLL (ADPLL) powered from a single 0.5-V supply. While its digitally controlled oscillator (DCO) runs directly at 0.5 V, an internal switched-capacitor dc-dc converter “doubles” the supply voltage to all the digital circuitry and particularly regulates the time-to-digital converter (TDC) supply to stabilize its resolution, thus...
A 250 mV 7.5 μW 61 dB SNDR SC ΔΣ Modulator Using Near-Threshold-Voltage-Biased Inverter Amplifiers in 130 nm CMOS An ultra-low voltage switched-capacitor (SC) ΔΣ converter running at a record low supply voltage of only 250 mV is introduced. System level aspects are discussed and special circuit techniques described, that enable robust operation at such a low supply voltage. Using a SC biasing approach, inverter-based integrators are realized with overdrives close to the transistor threshold voltage Vth while compensating for process, voltage and temperature (PVT) variation. Biasing voltages are generated on-chip using a novel level shifting circuit, that overcomes headroom limitations due to saturation voltage Vsat. With an oversampling ratio (OSR) of 70 and a sampling frequency (fS) of 1.4 MHz at 250 mV power supply the converter achieves 61 dB SNDR in 10 kHz bandwidth while consuming a total power of 7.5 μW.
A 180-mV subthreshold FFT processor using a minimum energy design methodology In emerging embedded applications such as wireless sensor networks, the key metric is minimizing energy dissipation rather than processor speed. Minimum energy analysis of CMOS circuits estimates the optimal operating point of clock frequencies, supply voltage, and threshold voltage according to A. Chandrakasan et al. (see ibid., vol.27, no.4, p.473-84, Apr. 1992). The minimum energy analysis show...
An octave-range watt-level fully integrated CMOS switching power mixer array for linearization and back-off efficiency improvement
Computing size-independent matrix problems on systolic array processors A methodology to transform dense matrices to band matrices is presented in this paper. The transformation is accomplished by triangular block partitioning and allows problems of any given size to be solved on contraflow systolic arrays, originally proposed by H.T. Kung. Matrix-vector and matrix-matrix multiplications are the operations considered here. The proposed transformations allow optimal utilization of the processing elements (PEs) of the systolic array when dense matrices are processed. Every computation is performed inside the array by using adequate feedback. The feedback delay time depends only on the systolic array size.
Wireless Communications Transmitter Performance Enhancement Using Advanced Signal Processing Algorithms Running in a Hybrid DSP/FPGA Platform This paper deals with digital base band signal processing algorithms, which are seen as enabling technologies for software-enabled radios, that are intended for the correction of the analog front end. In particular, this paper focuses on the design, optimization and testability of predistortion functions suitable for the linearization of narrowband and wideband transmitters developed with a hybrid DSP/FPGA platform. To select the best algorithm for the identification of the predistortion function, singular value decomposition, recursive least squares (RLS), and QR-RLS algorithms are implemented on the same digital signal processor; and, the computation complexity, time, accuracy and the required resources are studied. The hardware implementation of the predistortion function is then carefully performed, in order to meet the real time execution requirements.
Replica compensated linear regulators for supply-regulated phase-locked loops Supply-regulated phase-locked loops rely upon the VCO voltage regulator to maintain a low sensitivity to supply noise and hence low overall jitter. By analyzing regulator supply rejection, we show that in order to simultaneously meet the bandwidth and low dropout requirements, previous regulator implementations used in supply-regulated PLLs suffer from unfavorable tradeoffs between power supply rejection and power consumption. We therefore propose a compensation technique that places the regulator's amplifier in a local replica feedback loop, stabilizing the regulator by increasing the amplifier bandwidth while lowering its gain. Even though the forward gain of the amplifier is reduced, supply noise affects the replica output in addition to the actual output, and therefore the amplifier's gain to reject supply noise is effectively restored. Analysis shows that for reasonable mismatch between the replica and actual loads, regulator performance is uncompromised, and experimental results from a 90 nm SOI test chip confirm that with the same power consumption, the proposed regulator achieves at least 4 dB higher supply rejection than the previous regulator design. Furthermore, simulations show that if not for other supply rejection-limiting components in the PLL, the supply rejection improvement of the proposed regulator is greater than 15 dB.
A 2.4GHz sub-harmonically injection-locked PLL with self-calibrated injection timing A low-phase-noise integer-N phase-locked loop (PLL) is attractive in many applications, such as clock generation and analog-to-digital conversion. The sub-harmonically injection-locked technique, the sub-sampling technique, and the multiplying delay-locked loop (MDLL) can significantly improve the phase noise of an integer-N PLL. In the sub-harmonically injection-locked technique, to inject a low-frequency reference clock into a high-frequency voltage-controlled oscillator (VCO), the injection timing must be tightly controlled. If the injection timing varies due to process variation, it may cause a large reference spur or even cause the PLL to fail to lock. A sub-harmonically injection-locked PLL (SILPLL) adopts a sub-sampling phase detector (PD) to automatically align the phase between the injection pulse and the VCO. However, a sub-sampling PD has a small capture range and a low bandwidth. The high-frequency nonlinear effects of a sub-sampling PD may degrade the accuracy and limit the maximum speed of a VCO. In addition, a frequency-locked loop is needed for a sub-sampling PD. Alternatively, a delay line can be manually adjusted to achieve the correct injection timing; however, the delay line is sensitive to process variations. Thus, the injection timing should be calibrated.
An efficient low-cost fixed-point digital down converter with modified filter bank In a radar system, the digital down converter (DDC) is the most important part of the IF radar receiver: it extracts the needed baseband signal from the modulated IF signal and down-samples it with a decimation factor of 20. This paper proposes an efficient low-cost DDC structure comprising an NCO, a mixer, and a modified filter bank. The modified filter bank adopts a high-efficiency structure, including a 5-stage CIC filter, a 9-tap CFIR filter, and a 15-tap HB filter, which reduces the complexity and cost of implementation compared with the traditional filter bank. An optimized fixed-point program is then designed to implement the DDC on a fixed-point DSP or FPGA. The simulation results show that the proposed DDC meets the expected specification for an IF radar receiver.
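The NCO-mixer-filter-decimate flow this record describes can be sketched with NumPy; the moving-average filter below stands in for the CIC/CFIR/HB bank purely for illustration, and the sample rates and tone frequency are arbitrary assumptions.

```python
import numpy as np

fs, f_if, M = 200_000.0, 50_000.0, 20   # sample rate, IF, decimation factor
n = np.arange(4000)
sig = np.cos(2 * np.pi * (f_if + 500.0) / fs * n)  # IF input, 500 Hz baseband tone

# NCO + complex mixer: shift the IF band down to 0 Hz.
nco = np.exp(-2j * np.pi * f_if / fs * n)
mixed = sig * nco

# Crude lowpass (moving average) standing in for the filter bank,
# then decimate by M.
h = np.ones(64) / 64
baseband = np.convolve(mixed, h, mode="same")[::M]
print(baseband.shape)  # (200,) -> output rate fs / 20 = 10 kHz
```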
A 12-Bit Dynamic Tracking Algorithm-Based SAR ADC With Real-Time QRS Detection A 12-bit successive approximation register (SAR) ADC based on a dynamic tracking algorithm and a real-time QRS-detection algorithm are proposed. The dynamic tracking algorithm features two tracking windows adjacent to the prediction interval. The algorithm tracks the input signal's variation range and automatically adjusts the subrange interval and updates the prediction code. The QRS-complex detection algorithm integrates a synchronous time-sequential ADC and a real-time QRS detector. The chip is fabricated in a standard 0.13 μm CMOS process with a 0.6 V supply. Measurement results show that the proposed ADC exhibits a 10.72 effective number of bits (ENOB) and 79.63 dB spur-free dynamic range (SFDR) at a 10 kHz sample rate given a 41.5 Hz sinusoid input. The DNL and INL are bounded at -0.6/0.62 LSB and -0.67/1.43 LSB. The ADC achieves an FoM of 48 fJ/conversion-step in the best case. The prototype was also tested with an ECG signal input and extracts the heartbeat signal successfully.
Scores for the row above (score_0 to score_13): 1.071472, 0.063333, 0.063333, 0.063333, 0.063333, 0.03875, 0.016667, 0.000278, 0, 0, 0, 0, 0, 0
Periodic Event-Triggered Synchronization for Discrete-Time Complex Dynamical Networks In this article, we investigate the periodic event-triggered synchronization of discrete-time complex dynamical networks (CDNs). First, a discrete-time version of the periodic event-triggered mechanism (ETM) is proposed, under which the sensors sample the signals in a periodic manner; whether the sampled signals are transmitted to the controllers, however, is determined by a predefined periodic ETM. Compared with the common ETMs in the field of discrete-time systems, the proposed method avoids monitoring the measurements point-to-point and enlarges the lower bound of the inter-event intervals. As a result, it saves both energy and communication resources. Second, "discontinuous" Lyapunov functionals are constructed to deal with the sawtooth constraint of the sampled signals. The functionals can be viewed as the discrete-time extension of the discontinuous ones in the continuous-time setting. Third, sufficient conditions for ultimately bounded synchronization are derived for discrete-time CDNs with and without communication delays, respectively. A calculation method for simultaneously designing the triggering parameter and the control gains is developed such that the estimate of the error level is as accurate as possible. Finally, simulation examples are presented to show the effectiveness and improvements of the proposed method.
Pinning synchronization of delayed complex networks under self-triggered control In this paper, the pinning synchronization of delayed complex networks (DCNs) is investigated under self-triggered control (STC). The framework for synchronization analysis of DCNs under STC is established. Specifically, a new dynamic event-triggered scheme (DETS) is first proposed for the DCNs. The scheme involves internal dynamic variables, which play a crucial role in ensuring the exclusion of Zeno behavior. Secondly, to avoid continuously monitoring the triggering condition, an effective self-triggered scheme (STS) is proposed. Differently from previous works, the lower bound for the inter-event time of the STS is estimated explicitly based on the extended Grönwall inequality. A numerical example is provided to demonstrate the effectiveness of the theoretical results.
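For orientation, the event-triggered schemes in these two records refine the following generic static triggering rule; the form below is standard textbook background, not the dynamic or self-triggered condition either paper actually derives.

```latex
% Generic static event-triggering rule. With measurement error
% e(t) = x(t_k) - x(t) between events, release the next sample at
\[
  t_{k+1} \;=\; \inf\{\, t > t_k \;:\; \lVert e(t) \rVert \,\ge\, \sigma \,\lVert x(t) \rVert \,\},
  \qquad \sigma > 0 .
\]
```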
The Emergence of Intelligent Enterprises: From CPS to CPSS When IEEE Intelligent Systems solicited ideas for a new department, cyber-physical systems (CPS) received overwhelming support. Cyber-physical-social systems (CPSS) is the new name for CPS. CPSS is the enabling platform technology that will lead us to an era of intelligent enterprises and industries. Internet use and cyberspace activities have created an overwhelming demand for the rapid development and application of CPSS. CPSS must be pursued with a multidisciplinary approach involving the physical, social, and cognitive sciences, and AI-based intelligent systems will be key to any successful construction and deployment.
Pinning impulsive directed coupled delayed dynamical network and its applications The main objective of the present paper is to further investigate pinning synchronisation of a complex delayed dynamical network with directionally coupling by a single impulsive controller. By developing the analysis procedure of pinning impulsive stability for undirected coupled dynamical network previously, some simple yet general criteria of pinning impulsive synchronisation for such directed coupled network are derived analytically. It is shown that a single impulsive controller can always pin a given directed coupled network to a desired homogenous solution, including an equilibrium point, a periodic orbit, or a chaotic orbit. Subsequently, the theoretical results are illustrated by a directed small-world complex network which is a cellular neural network (CNN) and a directed scale-free complex network with the well-known Hodgkin-Huxley neuron oscillators. Numerical simulations are finally given to demonstrate the effectiveness of the proposed control methodology.
Event-Based Synchronization of Heterogeneous Complex Networks Subject to Transmission Delays. In this paper, the problem of event-based synchronization of heterogeneous complex networks is investigated. Specifically, the influence of transmission delays on event-based synchronization is considered. The designed distributed controller for each nonidentical node in heterogeneous network includes reference generator (RG) and robust regulator. Event-based communication protocol is utilized to ...
Passivity And Synchronisation Of Complex Dynamical Networks With Multiple Derivative Couplings Two network models with multiple derivative couplings and different dimensions of output and input vectors are investigated in this paper. The problem of passivity for the proposed network models is analysed by utilising some inequality techniques and Lyapunov functional method, and several synchronisation conditions for complex dynamical networks with multiple derivative couplings (CDNMDC) are given. Moreover, by employing adaptive state feedback control strategy, some sufficient conditions for guaranteeing passivity and synchronisation of CDNMDC are obtained. In the end, we give two examples to verify the effectiveness of the results.
Fixed-Time Synchronization of Complex Dynamical Networks: A Novel and Economical Mechanism Fixed-time synchronization of complex networks is investigated in this article. First, a completely novel lemma is introduced to prove the fixed-time stability of the equilibrium of a general ordinary differential system, which is less conservative and has a simpler form than those in the existing literature. Then, sufficient conditions are presented to realize synchronization of a complex network (with a target system) within a settling time via three different kinds of simple controllers. In general, controllers designed to achieve fixed-time stability consist of three terms and are discontinuous. However, in our mechanisms, the controllers only contain two terms or even one term and are continuous. Thus, our controllers are simpler and of more practical applicability. Finally, three examples are provided to illustrate the correctness and effectiveness of our results.
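The abstract does not reproduce the new lemma itself. For background only, a commonly used fixed-time stability condition from the prior literature (a Polyakov-type result; the paper claims a simpler, less conservative lemma) reads:

```latex
% Background: a standard fixed-time stability condition, not the
% paper's own lemma. If a Lyapunov function V satisfies
\[
\dot{V}(x) \le -\alpha V(x)^{p} - \beta V(x)^{q},
\qquad \alpha,\beta > 0,\quad 0 < p < 1 < q,
\]
% then the origin is fixed-time stable, with a settling time bounded
% uniformly in the initial condition x_0:
\[
T(x_0) \le T_{\max} = \frac{1}{\alpha(1-p)} + \frac{1}{\beta(q-1)} .
\]
```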
Event-triggered resilient control for cyber-physical systems under periodic DoS jamming attacks This paper considers the event-triggered resilient control problem for cyber-physical systems against periodic denial-of-service (DoS) attacks. Firstly, a novel event-triggered scheme without Zeno behavior is presented to save network resources and eliminate the occurrence of invalid events during periodic DoS attacks. Subsequently, by constructing a predictor-based event-triggered control framework, the upper bound of the prediction error related to the periodic DoS attack parameters is given. Furthermore, sufficient conditions related to the periodic DoS attack parameters are established to ensure input-to-state stability. It is shown that the proposed design method can achieve better system performance than existing ones when the system suffers from the same degree of periodic DoS attacks. Finally, the theoretical results are validated through a batch reactor system model.
A Bayesian Method for the Induction of Probabilistic Networks from Data This paper presents a Bayesian method for constructing probabilistic networks from databases. In particular, we focus on constructing Bayesian belief networks. Potential applications include computer-assisted hypothesis testing, automated scientific discovery, and automated construction of probabilistic expert systems. We extend the basic method to handle missing data and hidden (latent) variables. We show how to perform probabilistic inference by averaging over the inferences of multiple belief networks. Results are presented of a preliminary evaluation of an algorithm for constructing a belief network from a database of cases. Finally, we relate the methods in this paper to previous work, and we discuss open problems.
The CORDIC Trigonometric Computing Technique The COordinate Rotation DIgital Computer (CORDIC) is a special-purpose digital computer for real-time airborne computation. In this computer, a unique computing technique is employed which is especially suitable for solving the trigonometric relationships involved in plane coordinate rotation and conversion from rectangular to polar coordinates. CORDIC is an entire-transfer computer; it contains a special serial arithmetic unit consisting of three shift registers, three adder-subtractors, and special interconnections. By use of a prescribed sequence of conditional additions or subtractions, the CORDIC arithmetic unit can be controlled to solve either set of the following equations: Y' = K(Y cos θ + X sin θ), X' = K(X cos θ − Y sin θ); or R = K√(X² + Y²), θ = tan⁻¹(Y/X), where K is an invariable constant. This special arithmetic unit is also suitable for other computations such as multiplication, division, and the conversion between binary and mixed radix number systems. However, only the trigonometric algorithms used in this computer and the instrumentation of these algorithms are discussed in this paper.
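As a concrete illustration of the conditional add/subtract iteration described above, a minimal Python sketch of rotation-mode CORDIC; the iteration count, table, and variable names are our own choices:

```python
import math

# Rotation-mode CORDIC: computes (cos(theta), sin(theta)) using only
# additions/subtractions and scaling by 2**-i (shifts in hardware).
N = 32
ATAN_TABLE = [math.atan(2.0 ** -i) for i in range(N)]
K = 1.0                      # accumulated gain of the micro-rotations
for i in range(N):
    K *= math.sqrt(1.0 + 2.0 ** (-2 * i))

def cordic_cos_sin(theta):
    """theta in [-pi/2, pi/2]; returns (cos(theta), sin(theta))."""
    x, y, z = 1.0 / K, 0.0, theta      # pre-scale by 1/K
    for i in range(N):
        d = 1.0 if z >= 0.0 else -1.0  # conditional add or subtract
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * ATAN_TABLE[i]         # drive the residual angle to 0
    return x, y

print(cordic_cos_sin(math.pi / 6))     # approx (0.8660, 0.5000)
```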
A simple graph theoretic characterization of reachability for positive linear systems In this paper we consider discrete-time linear positive systems, that is, systems defined by a pair (A,B) of non-negative matrices. We study the reachability of such systems, which in this case amounts to the freedom of steering the state in the positive orthant by using non-negative control sequences. This problem was solved recently [Canonical forms for positive discrete-time linear control systems, Linear Algebra Appl., 310 (2000) 49]. However, we derive here necessary and sufficient conditions for reachability in a simpler and more compact form. These conditions are expressed in terms of particular paths in the graph which is naturally associated with the system.
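The paper's simpler graph condition is not spelled out in the abstract. As a hedged illustration, the classical monomial-submatrix criterion for positive reachability (the textbook test this line of work builds on) can be checked numerically; all names below are ours:

```python
import numpy as np

def is_monomial_column(c):
    # A monomial column has exactly one strictly positive entry.
    return np.count_nonzero(c) == 1 and c.max() > 0

def positively_reachable(A, B):
    """Classical test: (A, B) with A, B >= 0 is positively reachable
    iff R = [B, AB, ..., A^(n-1)B] contains an n x n monomial submatrix,
    i.e. monomial columns hitting every coordinate direction."""
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    R = np.hstack(blocks)
    rows_hit = set()
    for c in R.T:                       # iterate over columns of R
        if is_monomial_column(c):
            rows_hit.add(int(np.flatnonzero(c)[0]))
    return rows_hit == set(range(n))

A = np.array([[0.0, 0.0], [1.0, 0.0]])
B = np.array([[1.0], [0.0]])
print(positively_reachable(A, B))       # True: B gives e1, AB gives e2
```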
Following the Software-Radio-Idea in the design concept of base stations-possibilities and limitations Even if the software radio (SWR) in the literal sense of its definition is still a vision, there are essential reasons why the concept of a software defined radio architecture is meaningful for commercial applications and for base stations. This article outlines, from the point of view of a manufacturer of mobile infrastructure, the benefits of a software radio approach for the manufacturer as well as for the mobile network provider. The technological challenges and current limits are discussed, and the activities in this field at Alcatel's Research & Innovations (R&I) department are presented.
Estimating stable delay intervals with a discretized Lyapunov-Krasovskii functional formulation. In general, a system with time delay may have multiple stable delay intervals. Especially, a stable delay interval does not always contain zero. Asymptotically accurate stability conditions such as discretized Lyapunov–Krasovskii functional (DLF) method and sum-of-square (SOS) method are especially effective for such systems. In this article, a DLF-based method is proposed to estimate the maximal stable delay interval accurately without using bisection when one point in this interval is given. The method is formulated as a generalized eigenvalue problem (GEVP) of linear matrix inequalities (LMIs), and an accurate estimate may be reached by iteration either in a finite number of steps or asymptotically. The coupled differential–difference equation formulation is used to illustrate the method. However, the idea can be easily adapted to the traditional differential–difference equation setting.
Charge-redistribution based quadratic operators for neural feature extraction. This paper presents a SAR converter based mixed-signal multiplier for the feature extraction of neural signals using quadratic operators. After a thorough analysis of design principles and circuit-level aspects, the proposed architecture is explored for the implementation of two quadratic operators often used for the characterization of neural activity, the moving average energy (MAE) operator and...
Scores: 1.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0, 0, 0, 0, 0, 0
A flash-TDC hybrid ADC architecture A flash-TDC hybrid ADC architecture is proposed in this paper. The operating principle relies on measuring the impact of the input amplitude on the delay of the comparators in the flash. TDCs capture this timing information, which is mapped to an output digital code using simple digital logic to provide additional bits of resolution.
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use dominance frontiers, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
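For illustration, a compact way to compute dominance frontiers once immediate dominators are known is the later Cooper-Harvey-Kennedy formulation (not the construction from this paper); `preds` and `idom` are our own names:

```python
def dominance_frontiers(preds, idom):
    """preds: node -> list of CFG predecessors; idom: node -> immediate
    dominator (entry maps to itself). Returns node -> dominance frontier."""
    df = {n: set() for n in idom}
    for b, ps in preds.items():
        if len(ps) >= 2:                  # only join points contribute
            for p in ps:
                runner = p
                while runner != idom[b]:
                    df[runner].add(b)     # b is in runner's frontier
                    runner = idom[runner]
    return df

# Diamond CFG: entry -> a, b; a -> join; b -> join.
preds = {"entry": [], "a": ["entry"], "b": ["entry"], "join": ["a", "b"]}
idom = {"entry": "entry", "a": "entry", "b": "entry", "join": "entry"}
print(dominance_frontiers(preds, idom))
# {'entry': set(), 'a': {'join'}, 'b': {'join'}, 'join': set()}
```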
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ⋄W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ⋄W. Thus, ⋄W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
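A minimal sketch of the single operation Chord exposes, mapping a key to a node: a global-view toy ring with SHA-1 identifiers and a linear successor scan, omitting finger tables, joins, and failure handling (all names are ours):

```python
import hashlib

M = 16                                  # identifier bits; ring size 2**M

def h(key: str) -> int:
    """Hash a key or node name onto the identifier circle."""
    return int(hashlib.sha1(key.encode()).hexdigest(), 16) % (2 ** M)

class ChordRing:
    def __init__(self, node_names):
        self.ids = sorted(h(n) for n in node_names)

    def successor(self, key: str) -> int:
        """First node id clockwise from hash(key) on the circle."""
        k = h(key)
        for nid in self.ids:
            if nid >= k:
                return nid
        return self.ids[0]              # wrap around the ring

ring = ChordRing([f"node{i}" for i in range(8)])
print(ring.successor("my-data-item"))   # id of the responsible node
```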
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area² product (EDA²P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA²P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
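As one concrete instance among the applications listed, a minimal numpy sketch of ADMM for the lasso; the penalty rho, iteration count, and the absence of a stopping rule are simplifying assumptions:

```python
import numpy as np

def soft(v, k):
    # Soft-thresholding: the proximal operator of the l1 norm.
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

def admm_lasso(A, b, lam, rho=1.0, iters=200):
    """ADMM for min 0.5*||Ax - b||^2 + lam*||z||_1 subject to x = z."""
    n = A.shape[1]
    x = z = u = np.zeros(n)
    AtA, Atb = A.T @ A, A.T @ b
    L = np.linalg.cholesky(AtA + rho * np.eye(n))   # factor once, reuse
    for _ in range(iters):
        rhs = Atb + rho * (z - u)
        x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))  # x-update
        z = soft(x + u, lam / rho)                  # z-update (prox step)
        u = u + x - z                               # scaled dual update
    return z

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20); x_true[:3] = [2.0, -1.5, 1.0]
b = A @ x_true + 0.01 * rng.standard_normal(50)
print(np.round(admm_lasso(A, b, lam=1.0), 2)[:5])   # sparse recovery
```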
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)-size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε²}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D + √n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(log N), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator; while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via error amplifier) with a moderate-gain high-frequency path (via bandpass filter) at the inputs of PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate two zeros required in a Type III compensator. Constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process with constant Gm/C biasing technique being applied to one of the designs. Measurement result shows that converter output is settled within 7 μs for a load current step of 500 mA. Peak efficiency of 97% is obtained at 360 mW output power, and high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of proposed compensator is reduced by > 75 % in both designs, compared to an equivalent conventional Type III compensator.
Fundamental analysis of a car to car visible light communication system This paper presents a mathematical model for car-to-car (C2C) visible light communications (VLC) that aims to predict the system performance under different communication geometries. A market-weighted headlamp beam pattern model is employed. We consider both the line-of-sight (LOS) and non-line-of-sight (NLOS) links, and outline the relationship between the communication distance, the system bit error rate (BER) performance and the BER distribution on a vertical plane. Results show that by placing a photodetector (PD) at a height of 0.2-0.4 m above road surface in the car, the communications coverage range can be extended up to 20 m at a data rate of 2Mbps.
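The market-weighted headlamp pattern used in the paper is not reproduced here; as a rough stand-in, a generic Lambertian line-of-sight link budget with an OOK bit-error rate, where the responsivity, noise level, and geometry values are illustrative assumptions only:

```python
import math

def q_func(x):
    # Gaussian tail probability Q(x).
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def los_received_power(pt, d, phi, psi, m=1, area=1e-4,
                       fov=math.radians(60)):
    """Generic Lambertian LOS gain of order m (NOT the paper's
    market-weighted headlamp model); photodiode area in m^2."""
    if abs(psi) > fov:
        return 0.0
    gain = ((m + 1) / (2 * math.pi * d ** 2)
            * math.cos(phi) ** m * area * math.cos(psi))
    return pt * gain

R, sigma = 0.5, 1e-7          # assumed responsivity (A/W) and noise std
pr = los_received_power(pt=10.0, d=20.0, phi=0.2, psi=0.2)
ber = q_func(R * pr / sigma)  # OOK hard-decision BER approximation
print(f"Pr = {pr:.3e} W, BER = {ber:.3e}")
```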
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
Scores: 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Comparison of initial conditions for distributed algorithms on anonymous networks This paper studies the "usefulness" of initial conditions for distributed algorithms on anonymous networks. In the literature, several initial conditions, such as making one vertex a leader, giving each vertex the number of vertices, and so on, have been considered. In this paper, we study the relation between initial conditions by considering transformation algorithms from one initial condition to another. For such transformations, we consider both deterministic and randomized distributed algorithms. For each of the deterministic and randomized transformation types, we show that the relation induces an infinite lattice structure among equivalence classes of initial conditions.
An Identity-Free and On-Demand Routing Scheme against Anonymity Threats in Mobile Ad Hoc Networks Introducing node mobility into the network also introduces new anonymity threats. This important change of the concept of anonymity has recently attracted attentions in mobile wireless security research. This paper presents identity-free routing and on-demand routing as two design principles of anonymous routing in mobile ad hoc networks. We devise ANODR (ANonymous On-Demand Routing) as the needed anonymous routing scheme that is compliant with the design principles. Our security analysis and simulation study verify the effectiveness and efficiency of ANODR.
Space-Optimal Counting in Population Protocols. In this paper, we study the fundamental problem of counting, which consists in computing the size of a system. We consider the distributed communication model of population protocols: finite-state, anonymous and asynchronous mobile agents communicating in pairs according to a fairness condition. This work significantly improves the previous results known for counting in this model, in terms of exact space complexity. We present and prove correct the first space-optimal protocols solving the problem for two classical types of fairness, global and weak. Both protocols require no initialization of the counted agents. The protocol designed for global fairness, surprisingly, uses only one bit of memory (two states) per counted agent. The protocol functioning under weak fairness requires the necessary log P bits (P states) per counted agent to be able to count up to P agents. Interestingly, this protocol exploits the intriguing Gros sequence of natural numbers, which is also used in the solutions to the Chinese Rings and the Hanoi Towers puzzles.
Estimating and sampling graphs with multidimensional random walks Estimating characteristics of large graphs via sampling is a vital part of the study of complex networks. Current sampling methods such as (independent) random vertex and random walks are useful but have drawbacks. Random vertex sampling may require too many resources (time, bandwidth, or money). Random walks, which normally require fewer resources per sample, can suffer from large estimation errors in the presence of disconnected or loosely connected graphs. In this work we propose a new m-dimensional random walk that uses m dependent random walkers. We show that the proposed sampling method, which we call Frontier sampling, exhibits all of the nice sampling properties of a regular random walk. At the same time, our simulations over large real world graphs show that, in the presence of disconnected or loosely connected components, Frontier sampling exhibits lower estimation errors than regular random walks. We also show that Frontier sampling is more suitable than random vertex sampling to sample the tail of the degree distribution of the graph.
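A minimal sketch of the core Frontier sampling loop as described above: m dependent walkers, with the next walker to move chosen with probability proportional to the degree of its current node. Estimator construction and bias correction are omitted, and all names are ours:

```python
import random
from collections import defaultdict

def frontier_sampling(adj, m=8, steps=10000, seed=0):
    """m dependent random walkers on an undirected graph given as
    adjacency lists; returns visit counts over the sampled steps."""
    rng = random.Random(seed)
    walkers = rng.sample(list(adj), m)              # initial positions
    visits = defaultdict(int)
    for _ in range(steps):
        degs = [len(adj[v]) for v in walkers]
        i = rng.choices(range(m), weights=degs)[0]  # degree-proportional
        walkers[i] = rng.choice(adj[walkers[i]])    # uniform neighbor
        visits[walkers[i]] += 1
    return visits

adj = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1], 3: [1]}  # toy graph
print(dict(frontier_sampling(adj, m=2, steps=1000)))
```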
The price of validity in dynamic networks Massive-scale self-administered networks like Peer-to-Peer and Sensor Networks have data distributed across thousands of participant hosts. These networks are highly dynamic with short-lived hosts being the norm rather than an exception. In recent years, researchers have investigated best-effort algorithms to efficiently process aggregate queries (e.g., sum, count, average, minimum and maximum) [6, 13, 21, 34, 35, 37] on these networks. Unfortunately, query semantics for best-effort algorithms are ill-defined, making it hard to reason about guarantees associated with the result returned. In this paper, we specify a correctness condition, single-site validity, with respect to which the above algorithms are best-effort. We present a class of algorithms that guarantee validity in dynamic networks. Experiments on real-life and synthetic network topologies validate performance of our algorithms, revealing the hitherto unknown price of validity.
Information dissemination in highly dynamic graphs We investigate to what extent flooding and routing is possible if the graph is allowed to change unpredictably at each time step. We study what minimal requirements are necessary so that a node may correctly flood or route a message in a network whose links may change arbitrarily at any given point, subject to the condition that the underlying graph is connected. We look at algorithmic constraints such as limited storage, no knowledge of an upper bound on the number of nodes, and no usage of identifiers. We look at flooding as well as routing to some existing specified destination and give algorithms.
Information spreading in stationary Markovian evolving graphs Markovian evolving graphs [2] are dynamic-graph models where the links among a fixed set of nodes change over time according to an arbitrary Markovian rule. They are extremely general and can describe important dynamic-network scenarios well.
Exploration of the T-Interval-Connected Dynamic Graphs: The Case of the Ring In this paper, we study the T-interval-connected dynamic graphs from the point of view of the time necessary and sufficient for their exploration by a mobile entity (agent). A dynamic graph (more precisely, an evolving graph) is T-interval-connected (T ≥ 1) if, for every window of T consecutive time steps, there exists a connected spanning subgraph that is stable (always present) during this period. This property of connection stability over time was introduced by Kuhn, Lynch and Oshman [6] (STOC 2010). We focus on the case when the underlying graph is a ring of size n, and we show that the worst-case time complexity for the exploration problem is 2n − T − Θ(1) time units if the agent knows the dynamics of the graph, and n + (n / max{1, T − 1})(δ − 1) ± Θ(δ) time units otherwise, where δ is the maximum time between two successive appearances of an edge.
Chord: a scalable peer-to-peer lookup protocol for internet applications A fundamental problem that confronts peer-to-peer applications is the efficient location of the node that stores a desired data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis and simulations show that Chord is scalable: Communication cost and the state maintained by each node scale logarithmically with the number of Chord nodes.
The CORDIC Trigonometric Computing Technique The COordinate Rotation DIgital Computer (CORDIC) is a special-purpose digital computer for real-time airborne computation. In this computer, a unique computing technique is employed which is especially suitable for solving the trigonometric relationships involved in plane coordinate rotation and conversion from rectangular to polar coordinates. CORDIC is an entire-transfer computer; it contains a special serial arithmetic unit consisting of three shift registers, three adder-subtractors, and special interconnections. By use of a prescribed sequence of conditional additions or subtractions, the CORDIC arithmetic unit can be controlled to solve either set of the following equations: Y' = K(Y cos θ + X sin θ), X' = K(X cos θ − Y sin θ); or R = K√(X² + Y²), θ = tan⁻¹(Y/X), where K is an invariable constant. This special arithmetic unit is also suitable for other computations such as multiplication, division, and the conversion between binary and mixed radix number systems. However, only the trigonometric algorithms used in this computer and the instrumentation of these algorithms are discussed in this paper.
Implementing aggregation and broadcast over Distributed Hash Tables Peer-to-peer (P2P) networks represent an effective way to share information, since there are no central points of failure or bottleneck. However, the flip side to the distributive nature of P2P networks is that it is not trivial to aggregate and broadcast global information efficiently. We believe that this aggregation/broadcast functionality is a fundamental service that should be layered over existing Distributed Hash Tables (DHTs), and in this work, we design a novel algorithm for this purpose. Specifically, we build an aggregation/broadcast tree in a bottom-up fashion by mapping nodes to their parents in the tree with a parent function. The particular parent function family we propose allows the efficient construction of multiple interior-node-disjoint trees, thus preventing single points of failure in tree structures. In this way, we provide DHTs with an ability to collect and disseminate information efficiently on a global scale. Simulation results demonstrate that our algorithm is efficient and robust.
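The paper's parent-function family is not given in the abstract, so the following is purely hypothetical: a toy parent function on integer DHT identifiers (clear the lowest set bit) that maps every node to a common root, illustrating only the bottom-up tree construction:

```python
# Hypothetical parent function -- NOT the family proposed in the paper.
# Clearing the lowest set bit sends every identifier to root id 0 in at
# most b steps on b-bit identifiers, yielding a tree over the nodes.
def parent(node_id: int) -> int:
    return node_id & (node_id - 1)

def path_to_root(node_id: int):
    """The path an aggregated value would travel up the toy tree."""
    path = [node_id]
    while node_id != 0:
        node_id = parent(node_id)
        path.append(node_id)
    return path

print(path_to_root(0b10110))   # [22, 20, 16, 0]
```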
Modeling of software radio aspects by mapping of SDL and CORBA With the evolution of 3rd generation mobile communications standardization, the software radio concept has the potential to offer a pragmatic solution - a software implementation that allows the mobile terminal to adapt dynamically to its radio environment. The mapping of SDL and CORBA mechanisms is introduced in order to provide a generic platform for the implementation of future mobile services, supporting standardized interfaces and manufacturer-platform-independent object and service functionality descriptions. For the functional entity diagram model, it is proposed that the functional entities be designed as objects, the functional entity groups as 'open' object-oriented SDL platforms, and the interfaces between them as CORBA IDLs, communicating via the ORB in a generic, implementation- and location-independent way. The functional entity groups are proposed to be modeled as SDL block types, while the functional entities and sub-entities are modeled as SDL process and service types. The objects interact with each other like client or server objects requesting or receiving services from other objects. Every object has a CORBA IDL interface, which allows every component to be distributed in an optimum way by providing a standardized infrastructure, ensuring interoperability, flexibility, reusability, transparency and management capabilities.
Kinesis: a security incident response and prevention system for wireless sensor networks This paper presents Kinesis, a security incident response and prevention system for wireless sensor networks, designed to keep the network functional despite anomalies or attacks and to recover from attacks without significant interruption. Due to the deployment of sensor networks in various critical infrastructures, the applications often impose stringent requirements on data reliability and service availability. Given the failure- and attack-prone nature of sensor networks, it is a pressing concern to enable the sensor networks provide continuous and unobtrusive services. Kinesis is quick and effective in response to incidents, distributed in nature, and dynamic in selecting response actions based on the context. It is lightweight in terms of response policy specification, and communication and energy overhead. A per-node single timer based distributed strategy to select the most effective response executor in a neighborhood makes the system simple and scalable, while achieving proper load distribution and redundant action optimization. We implement Kinesis in TinyOS and measure its performance for various application and network layer incidents. Extensive TOSSIM simulations and testbed experiments show that Kinesis successfully counteracts anomalies/attacks and behaves consistently under various attack scenarios and rates.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
Scores: 1.060492, 0.045747, 0.045747, 0.025545, 0.016672, 0.006352, 0.001429, 0.000569, 0.000002, 0, 0, 0, 0, 0
An IoT Framework for Heart Disease Prediction Based on MDCNN Classifier Nowadays, heart disease is the leading cause of death worldwide. Predicting heart disease is a complex task since it requires experience along with advanced knowledge. Internet of Things (IoT) technology has lately been adopted in healthcare systems to collect sensor values for heart disease diagnosis and prediction. Many researchers have focused on the diagnosis of heart disease, yet the accuracy of the diagnosis results is low. To address this issue, an IoT framework is proposed to evaluate heart disease more accurately using a Modified Deep Convolutional Neural Network (MDCNN). The smartwatch and heart monitor device attached to the patient monitor the blood pressure and electrocardiogram (ECG). The MDCNN is utilized for classifying the received sensor data into normal and abnormal. The performance of the system is analyzed by comparing the proposed MDCNN with existing deep learning neural networks and logistic regression. The results demonstrate that the proposed MDCNN-based heart disease prediction system performs better than other methods. The proposed method shows that, for the maximum number of records, the MDCNN achieves an accuracy of 98.2%, which is better than the existing classifiers.
A wearable smartphone-based platform for real-time cardiovascular disease detection via electrocardiogram processing. Cardiovascular disease (CVD) is the single leading cause of global mortality and is projected to remain so. Cardiac arrhythmia is a very common type of CVD and may indicate an increased risk of stroke or sudden cardiac death. The ECG is the most widely adopted clinical tool to diagnose and assess the risk of arrhythmia. ECGs measure and display the electrical activity of the heart from the body surface. During patients' hospital visits, however, arrhythmias may not be detected on standard resting ECG machines, since the condition may not be present at that moment in time. While Holter-based portable monitoring solutions offer 24-48 h ECG recording, they lack the capability of providing any real-time feedback for the thousands of heart beats they record, which must be tediously analyzed offline. In this paper, we seek to unite the portability of Holter monitors and the real-time processing capability of state-of-the-art resting ECG machines to provide an assistive diagnosis solution using smartphones. Specifically, we developed two smartphone-based wearable CVD-detection platforms capable of performing real-time ECG acquisition and display, feature extraction, and beat classification. Furthermore, the same statistical summaries available on resting ECG machines are provided.
An intelligent diagnosis system based on principle component analysis and ANFIS for the heart valve diseases In this paper, an intelligent diagnosis system based on principal component analysis (PCA) and an adaptive network-based fuzzy inference system (ANFIS) for heart valve disease is introduced. This intelligent system combines feature extraction and classification of Doppler signal waveforms measured at the heart valve using Doppler ultrasound, with wavelet entropy used as the feature. The system has three phases. In the pre-processing phase, data acquisition and pre-processing of the Doppler heart sound (DHS) signals are performed. In the feature extraction phase, a feature vector is extracted by calculating 12 wavelet entropy values per DHS signal, and the dimension of the Doppler signal dataset is reduced from 12 features to 6 using PCA. In the classification phase, these reduced wavelet entropy features are given as inputs to the ANFIS classifier. The diagnostic performance of the PCA-ANFIS intelligent system is evaluated on 215 samples. The classification accuracy of this PCA-ANFIS intelligent system was 96% for normal subjects and 93.1% for abnormal subjects.
A comparison of feature selection models utilizing binary particle swarm optimization and genetic algorithm in determining coronary artery disease using support vector machine The aim of this study is to investigate the efficiency of binary particle swarm optimization (BPSO) and genetic algorithm (GA) techniques as feature selection models for determining the existence of coronary artery disease (CAD) based upon exercise stress testing (EST) data, and to increase the classification performance of the classifier. The dataset, containing 23 features, was obtained from patients who had undergone EST and coronary angiography. A support vector machine (SVM) with k-fold cross-validation is used as the classifier of CAD existence in both the BPSO and GA feature selection techniques. Classification results of the feature selection techniques using BPSO and GA are compared with each other and also with the results of a simple SVM model using all features. The results show that feature selection using BPSO is more successful than feature selection using GA in determining CAD. Moreover, with the new dataset composed by feature selection using BPSO, this study achieved more accurate results for CAD detection with a less complex classifier system and shorter classification time compared with the SVM using all features.
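A minimal sketch of binary PSO in the spirit of the feature-selection loop above; a toy fitness function stands in for the paper's SVM-with-k-fold-cross-validation objective, and all parameter values are illustrative assumptions:

```python
import math, random

def bpso(num_features, fitness, swarm=20, iters=50,
         w=0.7, c1=1.5, c2=1.5, seed=0):
    """Binary PSO: positions are bit masks (1 = feature selected);
    a sigmoid maps each velocity to the probability of the bit being 1."""
    rng = random.Random(seed)
    X = [[rng.randint(0, 1) for _ in range(num_features)]
         for _ in range(swarm)]
    V = [[0.0] * num_features for _ in range(swarm)]
    pbest = [x[:] for x in X]
    pbest_f = [fitness(x) for x in X]
    g = max(range(swarm), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(swarm):
            for d in range(num_features):
                V[i][d] = (w * V[i][d]
                           + c1 * rng.random() * (pbest[i][d] - X[i][d])
                           + c2 * rng.random() * (gbest[d] - X[i][d]))
                X[i][d] = 1 if rng.random() < 1 / (1 + math.exp(-V[i][d])) else 0
            f = fitness(X[i])
            if f > pbest_f[i]:
                pbest[i], pbest_f[i] = X[i][:], f
                if f > gbest_f:
                    gbest, gbest_f = X[i][:], f
    return gbest, gbest_f

# Toy fitness standing in for cross-validated SVM accuracy: reward the
# first five features, lightly penalize subset size.
toy = lambda bits: sum(bits[:5]) - 0.1 * sum(bits)
print(bpso(23, toy))
```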
Cardiac disorder classification by heart sound signals using murmur likelihood and hidden markov model state likelihood This study proposes a new algorithm for cardiac disorder classification by heart sound signals. The algorithm consists of three steps: segmentation, likelihood computation and classification. In the segmentation step, the authors convert heart sound signals into mel-frequency cepstral coefficient features and then partition input signals into S1/S2 intervals by using a hidden Markov model (HMM). I...
Deep Neural Networks for the Recognition and Classification of Heart Murmurs Using Neuromorphic Auditory Sensors. Auscultation is one of the most used techniques for detecting cardiovascular diseases, which is one of the main causes of death in the world. Heart murmurs are the most common abnormal finding when a patient visits the physician for auscultation. These heart sounds can either be innocent, which are harmless, or abnormal, which may be a sign of a more serious heart condition. However, the accuracy ...
Effective diagnosis of heart disease through neural networks ensembles In the last decades, several tools and various methodologies have been proposed by researchers for developing effective medical decision support systems, and new methodologies and tools continue to appear. Diagnosing heart disease is an important issue, and many researchers have investigated intelligent medical decision support systems to improve the ability of physicians. In this paper, we introduce a methodology which uses SAS base software 9.1.3 for diagnosing heart disease. A neural network ensemble method is at the centre of the proposed system. This ensemble-based method creates new models by combining the posterior probabilities or the predicted values from multiple predecessor models, so more effective models can be created. We performed experiments with the proposed tool and obtained 89.01% classification accuracy on the data taken from the Cleveland heart disease database. We also obtained 80.95% and 95.91% sensitivity and specificity values, respectively, in heart disease diagnosis.
Information-driven dynamic sensor collaboration This article overviews the information-driven approach to sensor collaboration in ad hoc sensor networks. The main idea is for a network to determine participants in a "sensor collaboration" by dynamically optimizing the information utility of data for a given cost of communication and computation. A definition of information utility is introduced, and several approximate measures of the information utility are developed for reasons of computational tractability. We illustrate the use of this approach using examples drawn from tracking applications
Randomized gossip algorithms Motivated by applications to sensor, peer-to-peer, and ad hoc networks, we study distributed algorithms, also known as gossip algorithms, for exchanging information and for computing in an arbitrarily connected network of nodes. The topology of such networks changes continuously as new nodes join and old nodes leave the network. Algorithms for such networks need to be robust against changes in topology. Additionally, nodes in sensor networks operate under limited computational, communication, and energy resources. These constraints have motivated the design of "gossip" algorithms: schemes which distribute the computational burden and in which a node communicates with a randomly chosen neighbor. We analyze the averaging problem under the gossip constraint for an arbitrary network graph, and find that the averaging time of a gossip algorithm depends on the second largest eigenvalue of a doubly stochastic matrix characterizing the algorithm. Designing the fastest gossip algorithm corresponds to minimizing this eigenvalue, which is a semidefinite program (SDP). In general, SDPs cannot be solved in a distributed fashion; however, exploiting problem structure, we propose a distributed subgradient method that solves the optimization problem over the network. The relation of averaging time to the second largest eigenvalue naturally relates it to the mixing time of a random walk with transition probabilities derived from the gossip algorithm. We use this connection to study the performance and scaling of gossip algorithms on two popular networks: Wireless Sensor Networks, which are modeled as Geometric Random Graphs, and the Internet graph under the so-called Preferential Connectivity (PC) model.
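A minimal simulation of the pairwise averaging step the abstract analyzes; the eigenvalue optimization and distributed SDP machinery are omitted, and all names are ours:

```python
import random

def gossip_average(values, adj, rounds=2000, seed=0):
    """Randomized gossip: a random node averages with a random neighbor
    each step; all values converge to the global average."""
    rng = random.Random(seed)
    x = dict(values)
    nodes = list(x)
    for _ in range(rounds):
        i = rng.choice(nodes)
        j = rng.choice(adj[i])
        x[i] = x[j] = (x[i] + x[j]) / 2.0   # the pairwise averaging step
    return x

adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}        # a path graph
print(gossip_average({0: 4.0, 1: 0.0, 2: 0.0, 3: 0.0}, adj))
# Every entry approaches the true average, 1.0.
```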
Analysis of First-Order Anti-Aliasing Integration Sampler Performance of the first-order anti-aliasing integration sampler used in software-defined radio (SDR) receivers is analyzed versus all practical nonidealities. The nonidealities that are considered in this paper are transconductor finite output resistance, switch resistance, nonzero rise and fall times of the sampling clock, charge injection, clock jitter, and noise. It is proved that the filter i...
Approximately bisimilar symbolic models for nonlinear control systems Control systems are usually modeled by differential equations describing how physical phenomena can be influenced by certain control parameters or inputs. Although these models are very powerful when dealing with physical phenomena, they are less suited to describe software and hardware interfacing with the physical world. For this reason there is a growing interest in describing control systems through symbolic models that are abstract descriptions of the continuous dynamics, where each ''symbol'' corresponds to an ''aggregate'' of states in the continuous model. Since these symbolic models are of the same nature of the models used in computer science to describe software and hardware, they provide a unified language to study problems of control in which software and hardware interact with the physical world. Furthermore, the use of symbolic models enables one to leverage techniques from supervisory control and algorithms from game theory for controller synthesis purposes. In this paper we show that every incrementally globally asymptotically stable nonlinear control system is approximately equivalent (bisimilar) to a symbolic model. The approximation error is a design parameter in the construction of the symbolic model and can be rendered as small as desired. Furthermore, if the state space of the control system is bounded, the obtained symbolic model is finite. For digital control systems, and under the stronger assumption of incremental input-to-state stability, symbolic models can be constructed through a suitable quantization of the inputs.
A 60-GHz 16QAM/8PSK/QPSK/BPSK Direct-Conversion Transceiver for IEEE802.15.3c. This paper presents a 60-GHz direct-conversion transceiver using 60-GHz quadrature oscillators. The transceiver has been fabricated in a standard 65-nm CMOS process. It includes a receiver with a 17.3-dB conversion gain and less than 8.0-dB noise figure, a transmitter with a 18.3-dB conversion gain, a 9.5-dBm output 1 dB compression point, a 10.9-dBm saturation output power and 8.8-% power added ...
Robust compensation of a chattering time-varying input delay We investigate the design of a prediction-based controller for a linear system subject to a time-varying input delay, not necessarily causal. This means that the information feeding the system can be older than information previously received. We propose to use the current delay value in the prediction employed in the control law. Modeling the input delay as a transport Partial Differential Equation, we prove asymptotic tracking of the system state, provided that the average ℒ2-norm of the delay time-derivative is sufficiently small. This result is obtained by generalizing Halanay's inequality to time-varying differential inequalities.
An Energy-Efficient SAR ADC With Event-Triggered Error Correction This brief presents an energy-efficient fully differential 10-bit successive approximation register (SAR) analog-to-digital converter (ADC) with a sampling rate of 320 kS/s. The optimal capacitor split and bypass number is analyzed to achieve the highest switching energy efficiency. The common-mode voltage level remains constant during the MSB-capacitor switching cycles. To minimize nonlinearity due to charge averaging voltage offset or DAC array mismatch, an event-triggered error correction method is employed as a redundant cycle for detecting digital code errors within 1 least significant bit (LSB). A test chip was fabricated using the 180-nm CMOS process and occupied a 0.0564-mm² core area. Under a regular 0.65-V supply voltage, the ADC achieved an effective number of bits of 9.61 bits and a figure of merit (FOM) of 6.38 fJ/conversion-step, with 1.6-μW power dissipation for a low-frequency input. The measured differential and integral nonlinearity results are within 0.30 LSB and 0.43 LSB, respectively.
Scores: 1.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.066667, 0, 0, 0, 0, 0, 0, 0
Self-Synchronizing Pulse Position Modulation With Error Tolerance Pulse position modulation (PPM) is a popular signal modulation technique which converts signals into M-ary data by means of the position of a pulse within a time interval. While PPM and its variations have great advantages in many contexts, this type of modulation is vulnerable to loss of synchronization, potentially causing a severe error floor or throughput penalty even when little or no noise is assumed. Another disadvantage is that this type of modulation typically offers no error correction mechanism on its own, making it sensitive to intersymbol interference and environmental noise. In this paper, we propose a coding theoretic variation of PPM that allows for significantly more efficient symbol and frame synchronization as well as strong error correction. The proposed scheme can be divided into a synchronization layer and a modulation layer. This makes our technique compatible with major existing techniques such as standard PPM, multipulse PPM, and expurgated PPM in that the scheme can be realized by adding a simple synchronization layer to one of these standard techniques. We also develop a generalization of expurgated PPM suited for the modulation layer of the proposed self-synchronizing modulation scheme. This generalized PPM can also be used as stand-alone error-correcting PPM with a larger number of available symbols.
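For context, plain M-ary PPM (the baseline the proposed scheme layers onto, not the self-synchronizing scheme itself) maps each symbol to a pulse position within an M-slot frame; a toy modulator and hard-decision demodulator:

```python
def ppm_modulate(symbols, M=4):
    """Standard M-ary PPM: symbol s -> a single pulse in slot s."""
    frames = []
    for s in symbols:
        frame = [0] * M
        frame[s] = 1
        frames.append(frame)
    return frames

def ppm_demodulate(frames):
    # Hard decision: pick the slot with the largest amplitude per frame.
    return [max(range(len(f)), key=lambda i: f[i]) for f in frames]

tx = ppm_modulate([2, 0, 3, 1])
print(tx)                   # [[0,0,1,0], [1,0,0,0], [0,0,0,1], [0,1,0,0]]
print(ppm_demodulate(tx))   # [2, 0, 3, 1]
```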
Measurement issues in galvanic intrabody communication: influence of experimental setup Significance: The need for increasingly energy-efficient and miniaturized bio-devices for ubiquitous health monitoring has paved the way for considerable advances in the investigation of techniques such as intrabody communication (IBC), which uses human tissues as a transmission medium. However, IBC still poses technical challenges regarding the measurement of the actual gain through the human body. The heterogeneity of experimental setups and conditions used, together with the inherent uncertainty caused by the human body, makes the measurement process even more difficult. Goal: The objective of this work, focused on galvanic coupling IBC, is to study the influence of different measurement equipment and conditions on the IBC channel. Methods: Different experimental setups have been proposed in order to analyze key issues such as grounding, load resistance, type of measurement device and effect of cables. In order to avoid the uncertainty caused by the human body, an IBC electric circuit phantom mimicking both human bioimpedance and gain has been designed. Given the low-frequency operation of galvanic coupling, a frequency range between 10 kHz and 1 MHz has been selected. Results: The correspondence between simulated and experimental results obtained with the electric phantom has allowed us to discriminate the effects caused by the measurement equipment. Conclusion: This study has helped us obtain useful considerations about optimal setups for galvanic-type IBC as well as to identify some of the main causes of discrepancy in the literature.
Suitable Combination of Direct Intensity Modulation and Spreading Sequence for LIDAR with Pulse Coding. In the coded pulse scanning light detection and ranging (LIDAR) system, the number of laser pulses used at a given measurement point changes depending on the modulation and the method of spreading used in optical code-division multiple access (OCDMA). The number of laser pulses determines the pulse width, output power, and duration of the pulse transmission of a measurement point. These parameters determine the maximum measurement distance of the LIDAR and the number of measurement points that can be employed per second. In this paper, we suggest possible combinations of modulation and spreading technology that can be used for OCDMA, evaluate their performance and characteristics, and study optimal combinations according to varying operating environments.
Human body communication: Channel characterization issues Human Body Communication (HBC) is a promising wireless technology that uses the human body tissues as a signal propagation medium. In HBC, the information signal is coupled to the body through an electrostatic or magnetostatic field via electrodes and is captured in another part of the body using similar electrodes. HBC has lower power consumption than conventional radio frequency (RF) approaches, because it operates at lower frequencies, usually between 0.1 MHz and 100 MHz, avoiding the body shadowing effects, complex and power hungry RF circuits and antennas. In addition, the signal is mainly confined to the human body, guaranteeing high data communication security and high efficiency in the network utilization. Designs have already been shown to achieve energy efficiency of pJ/bit and power of micro-Watts, paving the way for autonomous, energy harvested powered devices [1]. With these characteristics, HBC helps to reduce the battery volume and consequently the size and the weight of wearable devices such as watches, earphones, glasses, shoes or clothes. Overall, HBC presents itself as an interesting alternative to implement Body Sensor Networks (BSN) or Body Area Networks (BAN), especially since it is supported by the IEEE standard 802.15.6 for short-range, low-power and highly reliable wireless communication systems for use in close proximity to or within the human body [2].
The software radio architecture As communications technology continues its rapid transition from analog to digital, more functions of contemporary radio systems are implemented in software, leading toward the software radio. This article provides a tutorial review of software radio architectures and technology, highlighting benefits, pitfalls, and lessons learned. This includes a closer look at the canonical functional partitioning of channel coding into antenna, RF, IF, baseband, and bitstream segments. A more detailed look at the estimation of demand for critical resources is key. This leads to a discussion of affordable hardware configurations, the mapping of functions to component hardware, and related software tools. This article then concludes with a brief treatment of the economics and likely future directions of software radio technology
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
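To make the single "no information" null concrete, here is a toy sketch in a spirit similar to the paper's, though not with its exact definitions: a tuple is subsumed by any tuple that carries at least as much information, and the generalized union discards subsumed tuples.

```python
# Toy model of relations with a single "no information" null (None).
# Tuple t1 is subsumed by t2 when t2 agrees with t1 everywhere t1 is
# non-null, i.e. t2 carries at least as much information as t1.
def subsumed(t1, t2):
    return all(v is None or t2[k] == v for k, v in t1.items())

def generalized_union(r, s):
    # Take the union (dropping exact duplicates), then discard any tuple
    # that some other, more informative tuple subsumes.
    candidates = r + [t for t in s if t not in r]
    return [t for t in candidates
            if not any(u is not t and subsumed(t, u) for u in candidates)]

r = [{"name": "Ann", "dept": None}]
s = [{"name": "Ann", "dept": "Sales"}, {"name": "Bob", "dept": None}]
print(generalized_union(r, s))
# [{'name': 'Ann', 'dept': 'Sales'}, {'name': 'Bob', 'dept': None}]
```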
Why systolic architectures?
Theory and Applications of Robust Optimization In this paper we survey the primary research, both theoretical and applied, in the area of robust optimization (RO). Our focus is on the computational attractiveness of RO approaches, as well as the modeling power and broad applicability of the methodology. In addition to surveying prominent theoretical results of RO, we also present some recent results linking RO to adaptable models for multistage decision-making problems. Finally, we highlight applications of RO across a wide spectrum of domains, including finance, statistics, learning, and various areas of engineering.
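As a minimal worked example of the robust counterparts the survey covers, consider a linear program whose constraint row is only known to lie in a box. For nonnegative variables the worst case is attained at the upper endpoints, so the robust counterpart is again an LP. The numbers below are invented for illustration, and scipy.optimize.linprog is used as an off-the-shelf solver.

```python
# Robust counterpart of an LP under box (interval) uncertainty in one
# constraint row: a_j lies in [a_nom_j - a_dev_j, a_nom_j + a_dev_j].
# For x >= 0 the worst case of a.x is (a_nom + a_dev).x, so the robust
# constraint stays linear.
import numpy as np
from scipy.optimize import linprog

c = np.array([-1.0, -1.0])        # maximize x1 + x2  ->  minimize -(x1 + x2)
a_nom = np.array([1.0, 2.0])      # nominal constraint row
a_dev = np.array([0.2, 0.5])      # interval half-widths
b = np.array([10.0])

nominal = linprog(c, A_ub=[a_nom], b_ub=b, bounds=[(0, None)] * 2)
robust = linprog(c, A_ub=[a_nom + a_dev], b_ub=b, bounds=[(0, None)] * 2)
print("nominal:", nominal.x, "robust:", robust.x)
# The robust solution is more conservative: it stays feasible for every
# realization of the uncertain row, at some cost in objective value.
```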
Cache attacks and countermeasures: the case of AES We describe several software side-channel attacks based on inter-process leakage through the state of the CPU’s memory cache. This leakage reveals memory access patterns, which can be used for cryptanalysis of cryptographic primitives that employ data-dependent table lookups. The attacks allow an unprivileged process to attack other processes running in parallel on the same processor, despite partitioning methods such as memory protection, sandboxing and virtualization. Some of our methods require only the ability to trigger services that perform encryption or MAC using the unknown key, such as encrypted disk partitions or secure network links. Moreover, we demonstrate an extremely strong type of attack, which requires knowledge of neither the specific plaintexts nor ciphertexts, and works by merely monitoring the effect of the cryptographic process on the cache. We discuss in detail several such attacks on AES, and experimentally demonstrate their applicability to real systems, such as OpenSSL and Linux’s dm-crypt encrypted partitions (in the latter case, the full key can be recovered after just 800 writes to the partition, taking 65 milliseconds). Finally, we describe several countermeasures for mitigating such attacks.
Collection and Analysis of Microprocessor Design Errors Research on practical design verification techniques has long been impeded by the lack of published, detailed error data. We have systematically collected design error data over the last few years from a number of academic microprocessor design projects. We analyzed this data and report on the lessons learned in the collection effort.
The challenges of merging two similar structured overlays: a tale of two networks Structured overlay networks are an important and interesting primitive that can be used by diverse peer-to-peer applications. Multiple overlays can result either because of network partitioning or (more likely) because different groups of peers build such overlays separately before coming into contact with each other and wishing to coalesce the overlays together. This paper is a first look into how multiple such overlays (all using the same protocols) can be merged, which is critical for the usability and adoption of such an internet-scale distributed system. We elaborate on how two networks using the same protocols can be merged, looking specifically into two different overlay design principles: (i) maintaining the ring invariant and (ii) structural replication, either of which is used in various overlay networks to guarantee functional correctness in a highly dynamic (membership-changing) environment. In particular, we show that ring-based networks cannot operate until the merger operation completes. In contrast, from the perspective of individual peers in structurally replicated overlays there is no disruption of service: they can continue to discover and access the resources they could before the merger process began, even though resources from the other network become visible only gradually as the merger progresses.
Digital signal processors in cellular radio communications Contemporary wireless communications are based on digital communications technologies. The recent commercial success of mobile cellular communications has been enabled in part by successful designs of digital signal processors with appropriate on-chip memories and specialized accelerators for digital transceiver operations. This article provides an overview of fixed point digital signal processors and ways in which they are used in cellular communications. Directions for future wireless-focused DSP technology developments are discussed
Optimum insertion/deletion point selection for fractional sample rate conversion In this paper, an optimum insertion/deletion point selection algorithm for fractional sample rate conversion (SRC) is proposed. The direct insertion/deletion technique achieves low complexity and low power consumption compared to other fractional SRC methods. Using a multiple-set insertion/deletion technique is effective in reducing the distortion caused by the insertion/deletion step. When the conversion factor is (N ± α)/N, the number of possible patterns of insertion/deletion points and the number of combinations for multiple-set inserters/deleters grow as α increases. The proposed algorithm minimizes the distortion due to SRC by selecting the patterns and the combinations.
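A minimal sketch of the direct insertion technique the abstract builds on follows; the block length, insertion pattern, and test signal are arbitrary choices, and the paper's actual contribution, searching for the distortion-minimizing patterns and combinations, is not reproduced here.

```python
import numpy as np

def direct_insertion_src(x, N, points):
    # Rate conversion by (N + len(points))/N: in each block of N input
    # samples, duplicate the samples at the given insertion points.
    out = []
    for start in range(0, len(x) - N + 1, N):
        block = list(x[start:start + N])
        for p in sorted(points, reverse=True):
            block.insert(p + 1, block[p])        # direct insertion: repeat sample p
        out.extend(block)
    return np.asarray(out)

x = np.sin(2 * np.pi * 0.01 * np.arange(64))
y = direct_insertion_src(x, N=16, points=[3, 11])  # one fixed, illustrative pattern
print(len(x), "->", len(y))                        # 64 -> 72, i.e. a factor of 18/16
```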
Multi-Channel Neural Recording Implants: A Review. Recent progress in neuroscience research, together with advancements in fabrication processes, has increased the demand for neural interfacing systems. Brain-machine interfaces (BMIs) have emerged as a promising method for the diagnosis and treatment of neurological disorders and the restoration of sensory and motor function. Neural recording implants, as part of a BMI, capture brain signals and amplify, digitize, and transfer them outside the body with a transmitter. The main challenges in designing such implants are minimizing power consumption and silicon area. In this paper, multi-channel neural recording implants are surveyed. After presenting various neural-signal features, we investigate the main available neural recording circuit and system architectures. The fundamental blocks of available architectures, such as neural amplifiers, analog-to-digital converters (ADCs), and compression blocks, are explored. We cover the various topologies of neural amplifiers, provide a comparison, and probe their design challenges. To achieve a relatively high SNR at the output of the neural amplifier, noise reduction techniques are discussed. Also, to transfer neural signals outside the body, they are digitized using data converters; then, in most cases, data compression is applied to mitigate power consumption. We present the various dedicated ADC structures, as well as an overview of the main data compression methods.
Scores (score_0–score_13): 1.1, 0.1, 0.1, 0.1, 0.000122, 0, 0, 0, 0, 0, 0, 0, 0, 0
Mitigating data leakage by protecting memory-resident sensitive data Gaining reliable arbitrary code execution through the exploitation of memory corruption vulnerabilities is becoming increasingly more difficult in the face of modern exploit mitigations. Facing this challenge, adversaries have started shifting their attention to data leakage attacks, which can lead to equally damaging outcomes, such as the disclosure of private keys or other sensitive data. In this work, we present a compiler-level defense against data leakage attacks for user-space applications. Our approach strikes a balance between the manual effort required to protect sensitive application data, and the performance overhead of achieving strong data confidentiality. To that end, we require developers to simply annotate those variables holding sensitive data, after which our framework automatically transforms only the fraction of the entire program code that is related to sensitive data operations. We implemented this approach by extending the LLVM compiler, and used it to protect memory-resident private keys in the MbedTLS server, ssh-agent, and a Libsodium-based file signing program, as well as user passwords for Lighttpd and Memcached. Our results demonstrate the feasibility and practicality of our technique: a modest runtime overhead (e.g., 13% throughput reduction for MbedTLS) that is on par with, or better than, existing state-of-the-art memory safety approaches for selective data protection.
ConfLLVM: A Compiler for Enforcing Data Confidentiality in Low-Level Code We present a compiler-based scheme to protect the confidentiality of sensitive data in low-level applications (e.g. those written in C) in the presence of an active adversary. In our scheme, the programmer marks sensitive data by lightweight annotations on the top-level definitions in the source code. The compiler then uses a combination of static dataflow analysis, runtime instrumentation, and a novel taint-aware form of control-flow integrity to prevent data leaks even in the presence of low-level attacks. To reduce runtime overheads, the compiler uses a novel memory layout. We implement our scheme within the LLVM framework and evaluate it on the standard SPEC-CPU benchmarks, and on larger, real-world applications, including the NGINX webserver and the OpenLDAP directory server. We find that the performance overheads introduced by our instrumentation are moderate (average 12% on SPEC), and the programmer effort to port the applications is minimal.
Data Space Randomization Over the past several years, US-CERT advisories, as well as most critical updates from software vendors, have been due to memory corruption vulnerabilities such as buffer overflows, heap overflows, etc. Several techniques have been developed to defend against the exploitation of these vulnerabilities, with the most promising defenses being based on randomization. Two randomization techniques have been explored so far: address space randomization (ASR) that randomizes the location of objects in virtual memory, and instruction set randomization (ISR) that randomizes the representation of code. We explore a third form of randomization called data space randomization (DSR) that randomizes the representation of data stored in program memory. Unlike ISR, DSR is effective against non-control data attacks as well as code injection attacks. Unlike ASR, it can protect against corruption of non-pointer data as well as pointer-valued data. Moreover, DSR provides a much higher range of randomization (typically 2^32 for 32-bit data) as compared to ASR. Other interesting aspects of DSR include (a) it does not share a weakness common to randomization-based defenses, namely, susceptibility to information leakage attacks, and (b) it is capable of detecting some exploits that are missed by full bounds-checking techniques, e.g., some of the overflows from one field of a structure to the next field. Our implementation results show that with appropriate design choices, DSR can achieve a performance overhead in the range of 5% to 30% for a range of programs.
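The core masking idea behind DSR can be illustrated in a few lines. This is only a conceptual sketch: real DSR is a C-level compiler transformation, and the 32-bit mask and the MaskedCell class here are invented for illustration.

```python
# Toy illustration of data space randomization: values that must never
# mix are stored XORed with per-class random masks, so a corrupting
# write through one class scrambles, rather than controls, data of
# another class. Only the masking arithmetic is shown here.
import secrets

MASK32 = (1 << 32) - 1

class MaskedCell:
    def __init__(self, value=0):
        self._mask = secrets.randbits(32)            # per-class random mask
        self._stored = (value ^ self._mask) & MASK32

    def store(self, value):
        self._stored = (value ^ self._mask) & MASK32

    def load(self):
        return (self._stored ^ self._mask) & MASK32

secret = MaskedCell(0xDEADBEEF)
print(hex(secret.load()))        # 0xdeadbeef
# An attacker who overwrites the stored word without knowing the mask
# gets a random-looking value back out, not the value they wrote:
secret._stored = 0x41414141
print(hex(secret.load()))        # scrambled, not 0x41414141
```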
Portable Software Fault Isolation We present a new technique for architecture-portable software fault isolation (SFI), together with a prototype implementation in the Coq proof assistant. Unlike traditional SFI, which relies on analysis of assembly-level programs, we analyze and rewrite programs in a compiler intermediate language, the Cminor language of the CompCert C compiler. But like traditional SFI, the compiler remains outside of the trusted computing base. By composing our program transformer with the verified back-end of CompCert and leveraging CompCert's formally proved preservation of the behavior of safe programs, we can obtain binary modules that satisfy the SFI memory safety policy for any of CompCert's supported architectures (currently: PowerPC, ARM, and x86-32). This allows the same SFI analysis to be used across multiple architectures, greatly simplifying the most difficult part of deploying trustworthy SFI systems.
EffectiveSan: type and memory error detection using dynamically typed C/C++ Low-level programming languages with weak/static type systems, such as C and C++, are vulnerable to errors relating to the misuse of memory at runtime, such as (sub-)object bounds overflows, (re)use-after-free, and type confusion. Such errors account for many security and other undefined-behavior bugs in programs written in these languages. In this paper, we introduce the notion of dynamically typed C/C++, which aims to detect such errors by dynamically checking the "effective type" of each object before use at runtime. We also present an implementation of dynamically typed C/C++ in the form of the Effective Type Sanitizer (EffectiveSan). EffectiveSan enforces type and memory safety using a combination of low-fat pointers, type metadata, and type/bounds check instrumentation. We evaluate EffectiveSan against the SPEC2006 benchmark suite and the Firefox web browser, and detect several new type and memory errors. We also show that EffectiveSan achieves high compatibility and reasonable overheads for the given error coverage. Finally, we highlight that EffectiveSan is one of only a few tools that can detect sub-object bounds errors, and uses a novel approach (dynamic type checking) to do so.
Thwarting Memory Disclosure with Efficient Hypervisor-enforced Intra-domain Isolation Exploiting memory disclosure vulnerabilities like the HeartBleed bug may cause arbitrary reading of a victim's memory, leading to leakage of critical secrets such as crypto keys, personal identity and financial information. While isolating code that manipulates critical secrets into an isolated execution environment is a promising countermeasure, existing approaches are either too coarse-grained to prevent intra-domain attacks, or require excessive intervention from low-level software (e.g., hypervisor or OS), or both. Further, few of them are applicable to large-scale software with millions of lines of code. This paper describes a new approach, namely SeCage, which retrofits commodity hardware virtualization extensions to support efficient isolation of sensitive code manipulating critical secrets from the remaining code. SeCage is designed to work under a strong adversary model where a victim application or even the OS may be controlled by the adversary, while supporting large-scale software with small deployment cost. SeCage combines static and dynamic analysis to decompose monolithic software into several compartments, each of which may contain different secrets and their corresponding code. Following the idea of separating control and data plane, SeCage retrofits the VMFUNC mechanism and nested paging in Intel processors to transparently provide different memory views for different compartments, while allowing low-cost and transparent invocation across domains without hypervisor intervention. We have implemented SeCage in KVM on a commodity Intel machine. To demonstrate the effectiveness of SeCage, we deploy it to the Nginx and OpenSSH server with the OpenSSL library as well as CryptoLoop with small efforts. Security evaluation shows that SeCage can prevent the disclosure of private keys from HeartBleed attacks and memory scanning from rootkits. The evaluation shows that SeCage only incurs small performance and space overhead.
The CHERI capability model: revisiting RISC in an age of risk Motivated by contemporary security challenges, we reevaluate and refine capability-based addressing for the RISC era. We present CHERI, a hybrid capability model that extends the 64-bit MIPS ISA with byte-granularity memory protection. We demonstrate that CHERI enables language memory model enforcement and fault isolation in hardware rather than software, and that the CHERI mechanisms are easily adopted by existing programs for efficient in-program memory safety. In contrast to past capability models, CHERI complements, rather than replaces, the ubiquitous page-based protection mechanism, providing a migration path towards deconflating data-structure protection and OS memory management. Furthermore, CHERI adheres to a strict RISC philosophy: it maintains a load-store architecture and requires only single-cycle instructions, and supplies protection primitives to the compiler, language runtime, and operating system. We demonstrate a mature FPGA implementation that runs the FreeBSD operating system with a full range of software and an open-source application suite compiled with an extended LLVM to use CHERI memory protection. A limit study compares published memory safety mechanisms in terms of instruction count and memory overheads. The study illustrates that CHERI is performance-competitive even while providing assurance and greater flexibility with simpler hardware.
Achievable rates in cognitive radio channels Cognitive radio promises a low-cost, highly flexible alternative to the classic single-frequency band, single-protocol wireless device. By sensing and adapting to its environment, such a device is able to fill voids in the wireless spectrum and can dramatically increase spectral efficiency. In this paper, the cognitive radio channel is defined as a two-sender, two-receiver interference channel in which sender 2 obtains the encoded message sender 1 plans to transmit. We consider two cases: in the genie-aided cognitive radio channel, sender 2 is noncausally presented the data to be transmitted by sender 1, while in the causal cognitive radio channel, the data is obtained causally. The cognitive radio at sender 2 may then choose to transmit simultaneously over the same channel, as opposed to waiting for an idle channel as is traditional for a cognitive radio. Our main result is the development of an achievable region which combines Gel'fand-Pinsker coding with an achievable region construction for the interference channel. In the additive Gaussian noise case, this resembles dirty-paper coding, a technique used in the computation of the capacity of the Gaussian multiple-input multiple-output (MIMO) broadcast channel. Numerical evaluation of the region in the Gaussian noise case is performed, and compared to an inner bound, the interference channel, and an outer bound, a modified Gaussian MIMO broadcast channel. Results are also extended to the case in which the message is causally obtained.
Reaching Agreement in the Presence of Faults The problem addressed here concerns a set of isolated processors, some unknown subset of which may be faulty, that communicate only by means of two-party messages. Each nonfaulty processor has a private value of information that must be communicated to each other nonfaulty processor. Nonfaulty processors always communicate honestly, whereas faulty processors may lie. The problem is to devise an algorithm in which processors communicate their own values and relay values received from others that allows each nonfaulty processor to infer a value for each other processor. The value inferred for a nonfaulty processor must be that processor's private value, and the value inferred for a faulty one must be consistent with the corresponding value inferred by each other nonfaulty processor.It is shown that the problem is solvable for, and only for, n ≥ 3m + 1, where m is the number of faulty processors and n is the total number. It is also shown that if faulty processors can refuse to pass on information but cannot falsely relay information, the problem is solvable for arbitrary n ≥ m ≥ 0. This weaker assumption can be approximated in practice using cryptographic methods.
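For intuition about the n ≥ 3m + 1 bound, here is a sketch of the recursive oral-messages algorithm OM(m) from the closely related Byzantine generals formulation by the same authors; the faulty-node behavior (alternating values) and the arbitrary tie-breaking are simplifying assumptions made for illustration.

```python
from collections import Counter

def majority(values):
    # Ties are broken arbitrarily here; a real implementation would fall
    # back to an agreed default value such as RETREAT.
    return Counter(values).most_common(1)[0][0]

def om(commander, lieutenants, value, m, faulty):
    # Round 0: the commander sends its value to every lieutenant. A faulty
    # commander sends conflicting values (modeled here as alternating).
    sent = {p: (("ATTACK" if i % 2 else "RETREAT") if commander in faulty else value)
            for i, p in enumerate(lieutenants)}
    if m == 0:
        return sent
    # Recursion: each lieutenant re-broadcasts what it received via OM(m-1).
    sub = {q: om(q, [r for r in lieutenants if r != q], sent[q], m - 1, faulty)
           for q in lieutenants}
    # Each lieutenant decides by majority over the direct and relayed values.
    return {p: majority([sent[p]] + [sub[q][p] for q in lieutenants if q != p])
            for p in lieutenants}

# n = 4 processes and m = 1 fault (the commander): n >= 3m + 1 holds, so
# all loyal lieutenants still agree on a single value.
print(om(0, [1, 2, 3], "ATTACK", m=1, faulty={0}))
```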
Incremental Stochastic Subgradient Algorithms for Convex Optimization This paper studies the effect of stochastic errors on two constrained incremental subgradient algorithms. The incremental subgradient algorithms are viewed as decentralized network optimization algorithms as applied to minimize a sum of functions, when each component function is known only to a particular agent of a distributed network. First, the standard cyclic incremental subgradient algorithm is studied. In this, the agents form a ring structure and pass the iterate in a cycle. When there are stochastic errors in the subgradient evaluations, sufficient conditions on the moments of the stochastic errors are obtained that guarantee almost sure convergence when a diminishing step-size is used. In addition, almost sure bounds on the algorithm's performance with a constant step-size are also obtained. Next, the Markov randomized incremental subgradient method is studied. This is a noncyclic version of the incremental algorithm where the sequence of computing agents is modeled as a time nonhomogeneous Markov chain. Such a model is appropriate for mobile networks, as the network topology changes across time in these networks. Convergence results and error bounds for the Markov randomized method in the presence of stochastic errors for diminishing and constant step-sizes are obtained.
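A minimal sketch of the cyclic variant with stochastic subgradient errors, using an invented toy objective (a sum of absolute deviations, whose minimizer is the median), might look like this:

```python
# Cyclic incremental subgradient method for f(x) = sum_i f_i(x), where
# agent i only knows f_i. Here f_i(x) = |x - a_i| with subgradient
# sign(x - a_i); additive Gaussian noise models the stochastic errors.
import numpy as np

rng = np.random.default_rng(1)
a = np.array([1.0, 3.0, 4.0, 10.0])    # one scalar target per agent
x = 0.0
for k in range(1, 2001):
    step = 1.0 / k                      # diminishing step size
    for ai in a:                        # pass the iterate around the ring
        g = np.sign(x - ai) + rng.normal(0, 0.1)   # noisy subgradient of |x - a_i|
        x = x - step * g
print(x)   # converges into the median interval [3, 4]
```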
Reconstruction of Nonuniformly Sampled Bandlimited Signals Using a Differentiator–Multiplier Cascade This paper considers the problem of reconstructing a bandlimited signal from its nonuniform samples. Based on a discrete-time equivalent model for nonuniform sampling, we propose the differentiator-multiplier cascade, a multistage reconstruction system that recovers the uniform samples from the nonuniform samples. Rather than using optimally designed reconstruction filters, the system improves the...
A decentralized modular control framework for robust control of FES-activated walker-assisted paraplegic walking using terminal sliding mode and fuzzy logic control. A major challenge to developing functional electrical stimulation (FES) systems for paraplegic walking and widespread acceptance of these systems is the design of a robust control strategy that provides satisfactory tracking performance. The systems need to be robust against time-varying properties of neuromusculoskeletal dynamics, day-to-day variations, subject-to-subject variations, external dis...
PUMP: a programmable unit for metadata processing We introduce the Programmable Unit for Metadata Processing (PUMP), a novel software-hardware element that allows flexible computation with uninterpreted metadata alongside the main computation with modest impact on runtime performance (typically 10--40% for single policies, compared to metadata-free computation on 28 SPEC CPU2006 C, C++, and Fortran programs). While a host of prior work has illustrated the value of ad hoc metadata processing for specific policies, we introduce an architectural model for extensible, programmable metadata processing that can handle arbitrary metadata and arbitrary sets of software-defined rules in the spirit of the time-honored 0-1-∞ rule. Our results show that we can match or exceed the performance of dedicated hardware solutions that use metadata to enforce a single policy, while adding the ability to enforce multiple policies simultaneously and achieving flexibility comparable to software solutions for metadata processing. We demonstrate the PUMP by using it to support four diverse safety and security policies---spatial and temporal memory safety, code and data taint tracking, control-flow integrity including return-oriented-programming protection, and instruction/data separation---and quantify the performance they achieve, both singly and in combination.
A 32-Channel Time-Multiplexed Artifact-Aware Neural Recording System This paper presents a low-power, low-noise microsystem for the recording of neural local field potentials or intracranial electroencephalographic signals. It features 32 time-multiplexed channels at the electrode interface and offers the possibility to spatially delta-encode data to take advantage of the large correlation of signals captured from nearby channels. The circuit also implements a mixed-signal voltage-triggered auto-ranging algorithm which makes it possible to attenuate large interferers in the digital domain while preserving neural information. This effectively increases the system dynamic range and avoids the onset of saturation. A prototype, fabricated in a standard 180 nm CMOS process, has been experimentally verified in vitro with cellular cultures of primary cortical neurons from mice. The system shows an integrated input-referred noise in the 0.5–200 Hz band of 1.4 μVrms for a spot noise of about 85 nV/√Hz. The system draws 1.5 μW per channel from a 1.2 V supply and obtains 71 dB + 26 dB dynamic range when the artifact-aware auto-ranging mechanism is enabled, without penalizing other critical specifications such as crosstalk between channels or common-mode and power supply rejection ratios.
Scores (score_0–score_13): 1.1, 0.1, 0.1, 0.1, 0.1, 0.05, 0.02, 0, 0, 0, 0, 0, 0, 0
A Power-Efficient Hybrid Single-Inductor Bipolar-Output DC-DC Converter with Floating Negative Output for AMOLED Displays This paper presents a hybrid single-inductor bipolar-output (SIBO) DC-DC converter for active-matrix organic light-emitting diode (AMOLED) displays, which are particularly sensitive to noise on their positive supply. This design significantly improves display quality by achieving a near-zero voltage ripple at the positive output, thanks to the floating negative output and the use of low-power shunt regulators. In addition, with the hybrid topology and the proposed cross-coupled bootstrap-based level-shifter with a dual-PMOS inverter buffer, low-voltage devices without deep N-well are used, reducing chip area and cost. The proposed converter is implemented in a 0.35-μm CMOS process with 5-V devices. The targeted output voltages are 5.3 V and -4.7 V. Operating at 1 MHz, the measured positive output ripple is lower than 1 mV under all conditions. The measured peak power efficiency is 89.3% at 1.1 W output power. The maximum output power is 3.5 W.
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use dominance frontiers, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
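The dominance-frontier idea can be sketched compactly. The version below follows the well-known Cooper-Harvey-Kennedy formulation rather than the paper's original presentation, and assumes the immediate-dominator map has already been computed.

```python
# Compact dominance-frontier computation, assuming idom is known.
# df[n] is exactly where phi-functions for a definition in n may be
# needed when constructing SSA form.
def dominance_frontiers(preds, idom):
    df = {n: set() for n in idom}
    for n, ps in preds.items():
        if len(ps) >= 2:                  # only join points can be in a frontier
            for p in ps:
                runner = p
                while runner != idom[n]:  # walk up the dominator tree
                    df[runner].add(n)
                    runner = idom[runner]
    return df

# Diamond CFG: entry -> a, b; a, b -> merge.
preds = {"entry": [], "a": ["entry"], "b": ["entry"], "merge": ["a", "b"]}
idom  = {"entry": "entry", "a": "entry", "b": "entry", "merge": "entry"}
print(dominance_frontiers(preds, idom))
# {'entry': set(), 'a': {'merge'}, 'b': {'merge'}, 'merge': set()}
```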
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ◊W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ◊W. Thus, ◊W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
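A toy sketch of Chord's key-to-node mapping (consistent hashing on an identifier ring) is shown below. The 16-bit ring, SHA-1 truncation, and node names are illustrative choices, and the O(log n) finger-table routing that makes real Chord scalable is omitted.

```python
import hashlib
from bisect import bisect_left

M = 16  # identifier bits: positions 0 .. 2**16 - 1 on the ring

def ident(name):
    # Hash names onto the identifier circle (SHA-1, truncated to M bits).
    return int.from_bytes(hashlib.sha1(name.encode()).digest(), "big") % (1 << M)

def successor(node_ids, key_id):
    # A key lives on the first node whose identifier is >= the key's,
    # wrapping around the circle.
    ids = sorted(node_ids)
    return ids[bisect_left(ids, key_id) % len(ids)]

nodes = {ident(f"node{i}") for i in range(5)}
key = ident("my-data-item")
print("stored at node id", successor(nodes, key))
nodes.add(ident("node5"))   # a join relocates only the keys between two nodes
print("stored at node id", successor(nodes, key))
```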
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area² product (EDA²P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA²P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
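As a concrete instance of the method, here is a compact ADMM solver for the lasso, one of the problems the review discusses; the penalty parameter, iteration count, and synthetic data are arbitrary illustrative choices.

```python
import numpy as np

def lasso_admm(A, b, lam, rho=1.0, iters=200):
    # minimize 0.5*||Ax - b||^2 + lam*||z||_1  subject to  x - z = 0
    n = A.shape[1]
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))   # factor once, reuse
    Atb = A.T @ b
    soft = lambda v, k: np.sign(v) * np.maximum(np.abs(v) - k, 0.0)
    for _ in range(iters):
        x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))
        z = soft(x + u, lam / rho)      # proximal step for the l1 term
        u = u + x - z                   # scaled dual-variable update
    return z

rng = np.random.default_rng(0)
A = rng.normal(size=(40, 10))
x_true = np.zeros(10)
x_true[:3] = [2.0, -1.5, 1.0]
b = A @ x_true + 0.01 * rng.normal(size=40)
print(np.round(lasso_admm(A, b, lam=1.0), 2))   # sparse estimate near x_true
```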
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)- size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε²}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D+√n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(log N), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator, while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via an error amplifier) to a moderate-gain high-frequency path (via a bandpass filter) at the inputs of the PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate the two zeros required in a Type III compensator. A constant-Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process, with the constant-Gm/C biasing technique applied to one of the designs. Measurement results show that the converter output settles within 7 μs for a load current step of 500 mA. A peak efficiency of 97% is obtained at 360 mW output power, and a high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of the proposed compensator are reduced by more than 75% in both designs, compared to an equivalent conventional Type III compensator.
Fundamental analysis of a car to car visible light communication system This paper presents a mathematical model for car-to-car (C2C) visible light communications (VLC) that aims to predict the system performance under different communication geometries. A market-weighted headlamp beam pattern model is employed. We consider both the line-of-sight (LOS) and non-line-of-sight (NLOS) links, and outline the relationship between the communication distance, the system bit error rate (BER) performance and the BER distribution on a vertical plane. Results show that by placing a photodetector (PD) at a height of 0.2-0.4 m above road surface in the car, the communications coverage range can be extended up to 20 m at a data rate of 2 Mbps.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
Scores (score_0–score_13): 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Parallel-RC Feedback Low-Noise Amplifier for UWB Applications A two-stage 3.1- to 10.6-GHz ultrawideband CMOS low-noise amplifier (LNA) is presented. In our design, parallel resistance-capacitance shunt feedback with a source inductance is proposed to obtain broadband input matching and to reduce the noise level effectively; furthermore, a parallel inductance-capacitance network at the drain is employed to further suppress the noise, and a very low noise level is achieved. The proposed LNA is implemented in the Taiwan Semiconductor Manufacturing Company 0.18-μm CMOS technology. Measured results show that the noise figure is 2.5-4.7 dB from 3.1 to 10.6 GHz, which may be the best result among previous reports for 0.18-μm CMOS 3.1- to 10.6-GHz ultrawideband LNAs. The power gain is 10.9-13.9 dB from 3.1 to 10.6 GHz. The input return loss is below -9.4 dB from 3.1 to 15 GHz. It consumes 14.4 mW from a 1.4-V supply voltage and occupies an area of only 0.46 mm².
Signal Processing Challenges For Applying Software Radio Principles In Future Wireless Terminals: An Overview The general idea of software radio is to develop highly integrated radio transceiver structures with high degree of flexibility and multimode capabilities, achieved through increased role of digital signal processing software in defining the functionalities which have traditionally been implemented with analog RF techniques. This paper explores the software radio concept from the receiver architecture and signal processing points of view, with mainly the wireless terminal application in mind. We first discuss the critical issues in alternative receiver architectures with simplified analog parts and increased configurability. We also introduce certain advanced digital signal processing techniques which could potentially relieve some of the essential problems and pave the way towards DSP-based, highly integrated, and highly configurable terminals. Big emphasis is on efficient digital multirate signal processing methods and complex (I/Q) signal processing.
A new CMOS wideband low noise amplifier with gain control In this paper, a new CMOS wideband low noise amplifier (LNA) is proposed that operates within a range of 470 MHz–3 GHz with current reuse, mirror bias, and a source inductive degeneration technique. A two-stage topology is adopted to implement the LNA based on the TSMC 0.18-μm RF CMOS process. Traditional wideband LNAs suffer from a fundamental trade-off between noise figure (NF), gain, and source impedance matching. Therefore, we propose a new LNA which obtains good NF and gain flatness performance by integrating two kinds of wideband matching techniques and a two-stage topology. The new LNA can also achieve a tunable gain at different power consumption conditions. The measurement results at the maximum power consumption mode show that the gain is between 11.3 and 13.6 dB, the NF is less than 2.5 dB, and the third-order intercept point (IIP3) is about -3.5 dBm. The LNA consumes a maximum power of about 27 mW with a 1.8-V power supply. The core area is 0.55×0.95 mm².
The Blixer, a Wideband Balun-LNA-I/Q-Mixer Topology This paper proposes to merge an I/Q current-commutating mixer with a noise-canceling balun-LNA. To realize a high bandwidth, the real part of the impedance of all RF nodes is kept low, and the voltage gain is created not at RF but in baseband, where capacitive loading is no problem. Thus a high RF bandwidth is achieved without using inductors for bandwidth extension. By using an I/Q mixer with a 25% duty-cycle LO waveform, the output IF currents also have a 25% duty cycle, causing a 2-times smaller DC voltage drop after IF filtering. This allows a 2-times increase in the impedance level of the IF filter, rendering more voltage gain for the same supply headroom. The implemented balun-LNA-I/Q-mixer topology achieves > 18 dB conversion gain, a flat noise figure < 5.5 dB from 500 MHz to 7 GHz, IIP2 = +20 dBm and IIP3 = -3 dBm. The core circuit consumes only 16 mW from a 1.2 V supply voltage and occupies less than 0.01 mm² in 65 nm CMOS.
Design and Analysis of a Performance-Optimized CMOS UWB Distributed LNA In this paper, the systematic design and analysis of a CMOS performance-optimized distributed low-noise amplifier (DLNA) comprising bandwidth-enhanced cascode cells will be presented. Each cascode cell employs an inductor between the common-source and common-gate devices to enhance the bandwidth, while reducing the high-frequency input-referred noise. The noise analysis and optimization of the DLN...
The path to the software-defined radio receiver After being the subject of speculation for many years, a software-defined radio receiver concept has emerged that is suitable for mobile handsets. A key step forward is the realization that in mobile handsets, it is enough to receive one channel with any bandwidth, situated in any band. Thus, the front-end can be tuned electronically. Taking a cue from a digital front-end, the receiver's flexible ...
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ◊W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ◊W. Thus, ◊W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Merged Two-Stage Power Converter With Soft Charging Switched-Capacitor Stage in 180 nm CMOS In this paper, we introduce a merged two-stage dc-dc power converter for low-voltage power delivery. By separating the transformation and regulation function of a dc-dc power converter into two stages, both large voltage transformation and high switching frequency can be achieved. We show how the switched-capacitor stage can operate under soft charging conditions by suitable control and integration (merging) of the two stages. This mode of operation enables improved efficiency and/or power density in the switched-capacitor stage. A 5-to-1 V, 0.8 W integrated dc-dc converter has been developed in 180 nm CMOS. The converter achieves a peak efficiency of 81%, with a regulation stage switching frequency of 10 MHz.
Disk Paxos We present an algorithm, called Disk Paxos, for implementing a reliable distributed system with a network of processors and disks. Like the original Paxos algorithm, Disk Paxos maintains consistency in the presence of arbitrary non-Byzantine faults. Progress can be guaranteed as long as a majority of the disks are available, even if all processors but one have failed.
A theory of nonsubtractive dither A detailed mathematical investigation of multibit quantizing systems using nonsubtractive dither is presented. It is shown that by the use of dither having a suitably chosen probability density function, moments of the total error can be made independent of the system input signal but that statistical independence of the error and the input signals is not achievable. Similarly, it is demonstrated that values of the total error signal cannot generally be rendered statistically independent of one another but that their joint moments can be controlled and that, in particular, the error sequence can be rendered spectrally white. The properties of some practical dither signals are explored, and recommendations are made for dithering in audio, video, and measurement applications. The paper collects all of the important results on the subject of nonsubtractive dithering and introduces important new ones with the goal of alleviating persistent and widespread misunderstandings regarding the technique
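The paper's central claim about moment control is easy to check numerically. The sketch below (sample sizes and DC test inputs are arbitrary choices) quantizes several constant inputs with nonsubtractive TPDF dither and shows that the first two moments of the total error are essentially input-independent:

```python
import numpy as np

rng = np.random.default_rng(0)
LSB = 1.0
quantize = lambda v: np.round(v / LSB) * LSB
n = 1_000_000

for x in [0.0, 0.25, 0.5]:   # DC inputs at different positions within one LSB
    # TPDF dither: sum of two independent uniforms, spanning +/- 1 LSB.
    tpdf = (rng.uniform(-0.5, 0.5, n) + rng.uniform(-0.5, 0.5, n)) * LSB
    err = quantize(x + tpdf) - x            # total error, dither not subtracted
    print(x, round(float(err.mean()), 4), round(float((err ** 2).mean()), 4))
# Mean stays ~0 and mean square ~LSB^2/4 (= 1/12 quantization + 1/6 dither)
# for every input; without dither both moments would depend on x.
```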
Analysis and Optimum Design of a Class E RF Power Amplifier A new analysis of a class E power amplifier is presented and a fully analytic design approach is developed. Using our analysis, all of the circuit currents and voltages and, hence, the power dissipation in each component is calculated as a function of a key design parameter, denoted by x. This parameter is the ratio of the resonance frequency of the shunt inductor and shunt capacitor to the operat...
Digital signal processors in cellular radio communications Contemporary wireless communications are based on digital communications technologies. The recent commercial success of mobile cellular communications has been enabled in part by successful designs of digital signal processors with appropriate on-chip memories and specialized accelerators for digital transceiver operations. This article provides an overview of fixed point digital signal processors and ways in which they are used in cellular communications. Directions for future wireless-focused DSP technology developments are discussed
Fundamental analysis of a car to car visible light communication system This paper presents a mathematical model for car-to-car (C2C) visible light communications (VLC) that aims to predict the system performance under different communication geometries. A market-weighted headlamp beam pattern model is employed. We consider both the line-of-sight (LOS) and non-line-of-sight (NLOS) links, and outline the relationship between the communication distance, the system bit error rate (BER) performance and the BER distribution on a vertical plane. Results show that by placing a photodetector (PD) at a height of 0.2-0.4 m above road surface in the car, the communications coverage range can be extended up to 20 m at a data rate of 2 Mbps.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
Scores (score_0–score_13): 1.071111, 0.066667, 0.022222, 0.013333, 0.00506, 0.001961, 0, 0, 0, 0, 0, 0, 0, 0
The Tactile Internet: Applications and Challenges Wireless communications today enables us to connect devices and people for an unprecedented exchange of multimedia and data content. The data rates of wireless communications continue to increase, mainly driven by innovation in electronics. Once the latency of communication systems becomes low enough to enable a round-trip delay from terminals through the network back to terminals of approximately 1 ms, an overlooked breakthrough, human tactile-to-visual feedback control, will change how humans communicate around the world. Using these controls, wireless communications can be the platform for enabling the control and direction of real and virtual objects in many situations of our life. Almost no area of the economy will be left untouched, as this new technology will change health care, mobility, education, manufacturing, smart grids, and much more. The Tactile Internet will become a driver for economic growth and innovation and will help bring a new level of sophistication to societies.
Perception-Based Data Reduction and Transmission of Haptic Data in Telepresence and Teleaction Systems We present a novel approach for the transmission of haptic data in telepresence and teleaction systems. The goal of this work is to reduce the packet rate between an operator and a teleoperator without impairing the immersiveness of the system. Our approach exploits the properties of human haptic perception and is, more specifically, based on the concept of just noticeable differences. In our scheme, updates of the haptic amplitude values are signaled across the network only if the change of a haptic stimulus is detectable by the human operator. We investigate haptic data communication for a 1 degree-of-freedom (DoF) and a 3 DoF teleaction system. Our experimental results show that the presented approach is able to reduce the packet rate between the operator and teleoperator by up to 90% of the original rate without affecting the performance of the system.
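The just-noticeable-difference idea reduces to a simple transmission rule: send an update only when the stimulus has changed by more than a Weber fraction of the last transmitted value. Below is a minimal sketch; the threshold k, the test signal, and the hold-last-sample receiver are assumptions for illustration, not the paper's experimental setup.

```python
import numpy as np

def deadband_filter(samples, k=0.05):
    # Transmit sample i only if it deviates from the last transmitted
    # value by more than the Weber fraction k of that value.
    sent, last = [], None
    for i, v in enumerate(samples):
        if last is None or abs(v - last) > k * abs(last):
            sent.append((i, v))   # packet goes out; receiver holds this value
            last = v
    return sent

t = np.linspace(0, 1, 1000)
force = 5 + np.sin(2 * np.pi * 2 * t)          # a slowly varying 1-DoF force
updates = deadband_filter(force, k=0.05)
print(f"{len(updates)} of {t.size} samples transmitted "
      f"({100 * (1 - len(updates) / t.size):.0f}% packet-rate reduction)")
```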
Design of a Pressure Control System With Dead Band and Time Delay This paper investigates the control of pressure in a hydraulic circuit containing a dead band and a time-varying delay. The dead band is modeled as a linear term plus a perturbation. A sliding mode controller is designed. Stability conditions are established by making use of Lyapunov-Krasovskii functionals, imperfect time-delay estimation is studied, and a condition for the effect of dead-zone uncertainties on stability is derived. The effect of different LMI formulations on conservativeness is also studied. The control law is tested in practice.
Lossy data compression using FDCT for haptic communication In this paper, a DCT-based lossy haptic data compression method for haptic communication systems is proposed to reduce the size of the data flowing between a master and a slave system. The calculation load of the DCT can be high, and the performance and stability of the system can deteriorate under that load. In order to keep the system hard real-time and its performance high, a fast DCT algorithm is adopted, and the calculation load is balanced over several sampling periods. The time delay introduced by the compression/expansion of the haptic data is predictable and constant, and can therefore be compensated by a time-delay compensator. Furthermore, since the delay in this paper is small enough, stable contact with a hard environment is achieved even without a time-delay compensator. The validity of the proposed lossy haptic data compression method is shown through simulation and experimental results.
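A rough sketch of the lossy DCT step follows. Keeping only the largest-magnitude coefficients stands in for the paper's actual quantization, the block length and "keep" count are invented parameters, and the paper's load balancing across sampling periods is not modeled.

```python
import numpy as np
from scipy.fft import dct, idct

def compress_block(x, keep=8):
    # Forward DCT, then zero all but the 'keep' largest-magnitude
    # coefficients: the lossy step of the compression pipeline.
    c = dct(x, norm="ortho")
    c[np.argsort(np.abs(c))[:-keep]] = 0.0
    return c

rng = np.random.default_rng(0)
block = np.sin(np.linspace(0, 3, 64)) + 0.02 * rng.normal(size=64)
rec = idct(compress_block(block), norm="ortho")
print("RMS reconstruction error:", float(np.sqrt(np.mean((rec - block) ** 2))))
```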
A Distributed Dynamic Event-Triggered Control Approach to Consensus of Linear Multiagent Systems With Directed Networks. In this paper, we study the consensus problem for a class of linear multiagent systems, where the communication networks are directed. First, a dynamic event-triggering mechanism is introduced, including some existing static event-triggering mechanisms as its special cases. Second, based on the dynamic event-triggering mechanism, a distributed control protocol is developed, which ensures that all agents can reach consensus with an exponential convergence rate. Third, it is shown that, with the dynamic event-triggering mechanism, the minimum interevent time between any two consecutive triggering instants can be prolonged and no agent exhibits Zeno behavior. Finally, an algorithm is provided to avoid continuous communication when the dynamic event-triggering mechanism is implemented. The effectiveness of the results is confirmed through a numerical example.
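For intuition, the sketch below simulates event-triggered consensus of single-integrator agents on a directed ring, using a simple static trigger (one of the special cases the paper's dynamic mechanism generalizes); the graph, step size, and threshold are illustrative assumptions:

```python
import numpy as np

# Directed ring over four agents; A[i, j] = 1 means agent i hears agent j.
A = np.array([[0, 0, 0, 1],
              [1, 0, 0, 0],
              [0, 1, 0, 0],
              [0, 0, 1, 0]], dtype=float)
deg = A.sum(axis=1)
x = np.array([1.0, -2.0, 0.5, 3.0])   # true states
xb = x.copy()                          # last-broadcast states
dt, sigma, events = 0.01, 0.2, 0

for _ in range(3000):
    u = A @ xb - deg * xb              # protocol uses only broadcast values
    x = x + dt * u
    # Static trigger: rebroadcast when the measurement error outgrows a
    # fraction of the control input (plus a floor to avoid Zeno behavior).
    fire = np.abs(x - xb) > sigma * np.abs(u) + 1e-3
    xb[fire] = x[fire]
    events += fire.sum()

print("final states:", np.round(x, 3), "| broadcasts:", events)
```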
On QUAD, Lipschitz, and Contracting Vector Fields for Consensus and Synchronization of Networks. In this paper, a relationship is discussed between three common assumptions made in the literature to prove local or global asymptotic stability of the synchronization manifold in networks of coupled nonlinear dynamical systems. In such networks, each node, when uncoupled, is described by a nonlinear ordinary differential equation of the form ẋ = f(x, t). We establish links between...
The part-time parliament Recent archaeological discoveries on the island of Paxos reveal that the parliament functioned despite the peripatetic propensity of its part-time legislators. The legislators maintained consistent copies of the parliamentary record, despite their frequent forays from the chamber and the forgetfulness of their messengers. The Paxon parliament's protocol provides a new way of implementing the state machine approach to the design of distributed systems.
Design Techniques for Fully Integrated Switched-Capacitor DC-DC Converters. This paper describes design techniques to maximize the efficiency and power density of fully integrated switched-capacitor (SC) DC-DC converters. Circuit design methods are proposed to enable simplified gate drivers while supporting multiple topologies (and hence output voltages). These methods are verified by a proof-of-concept converter prototype implemented in 0.374 mm2 of a 32 nm SOI process. ...
Distributed reset A reset subsystem is designed that can be embedded in an arbitrary distributed system in order to allow the system processes to reset the system when necessary. Our design is layered, and comprises three main components: a leader election, a spanning tree construction, and a diffusing computation. Each of these components is self-stabilizing in the following sense: if the coordination between the up-processes in the system is ever lost (due to failures or repairs of processes and channels), then each component eventually reaches a state where coordination is regained. This capability makes our reset subsystem very robust: it can tolerate fail-stop failures and repairs of processes and channels, even when a reset is in progress.
Winnowing: local algorithms for document fingerprinting Digital content is for copying: quotation, revision, plagiarism, and file sharing all create copies. Document fingerprinting is concerned with accurately identifying copying, including small partial copies, within large sets of documents. We introduce the class of local document fingerprinting algorithms, which seems to capture an essential property of any fingerprinting technique guaranteed to detect copies. We prove a novel lower bound on the performance of any local algorithm. We also develop winnowing, an efficient local fingerprinting algorithm, and show that winnowing's performance is within 33% of the lower bound. Finally, we give experimental results on Web data, and report experience with MOSS, a widely-used plagiarism detection service.
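Winnowing itself is compact enough to sketch: hash every k-gram, slide a window of w consecutive hashes, and record the rightmost minimum of each window as a fingerprint. The parameter values and the CRC32 hash below are illustrative choices, not the paper's:

```python
import zlib

def winnow(text, k=5, w=4):
    """Winnowing: hash every k-gram, then keep the minimum hash of each
    window of w consecutive hashes (rightmost minimum on ties)."""
    grams = [text[i:i + k] for i in range(len(text) - k + 1)]
    hashes = [zlib.crc32(g.encode()) for g in grams]
    prints = set()
    for i in range(len(hashes) - w + 1):
        window = hashes[i:i + w]
        lo = min(window)
        j = max(t for t in range(w) if window[t] == lo)   # rightmost minimum
        prints.add((i + j, window[j]))                    # (position, hash)
    return prints

a = winnow("the quick brown fox jumps over the lazy dog")
b = winnow("a quick brown fox jumped over one lazy dog")
shared = {h for _, h in a} & {h for _, h in b}
print(f"{len(shared)} shared fingerprints indicate overlapping passages")
```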
Yet another MicroArchitectural Attack:: exploiting I-Cache MicroArchitectural Attacks (MA), which can be considered as a special form of Side-Channel Analysis, exploit microarchitectural functionalities of processor implementations and can compromise the security of computational environments even in the presence of sophisticated protection mechanisms like virtualization and sandboxing. This newly evolving research area has attracted significant interest due to the broad application range and the potentials of these attacks. Cache Analysis and Branch Prediction Analysis were the only types of MA that had been known publicly. In this paper, we introduce Instruction Cache (I-Cache) as yet another source of MA and present our experimental results which clearly prove the practicality and danger of I-Cache Attacks.
Fully Integrated CMOS Power Amplifier With Efficiency Enhancement at Power Back-Off This paper presents a new approach for power amplifier design using deep submicron CMOS technologies. A transformer based voltage combiner is proposed to combine power generated from several low-voltage CMOS amplifiers. Unlike other voltage combining transformers, the architecture presented in this paper provides greater flexibility to access and control the individual amplifiers in a voltage comb...
Understanding the regenerative comparator circuit The regenerative comparator circuit which lies at the heart of A/D conversion, slicer circuits, and memory sensing, is unstable, time-varying, nonlinear, and with multiple equilibria. That does not mean, as this paper shows, that it cannot be understood with simple equivalent circuits that reveal its dynamics completely, and enable it to be designed to specifications on static and dynamic offset and noise. The analysis is applied to the StrongArm latch.
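A first-order version of the "simple equivalent circuit" view: during regeneration the cross-coupled pair acts as a negative resistance, so the differential voltage grows exponentially with time constant τ = C/gm, and smaller initial offsets cost logarithmically more decision time. The device values below are illustrative, not taken from the paper:

```python
import math

# Regeneration modeled as dv(t) = dv0 * exp(t / tau), tau = C / gm.
gm, C = 2e-3, 20e-15                  # cross-coupled gm [S], node capacitance [F]
tau = C / gm                          # regeneration time constant: 10 ps here

def decision_time(dv0, v_dec=0.5):
    """Time for an initial offset dv0 [V] to regenerate up to v_dec [V]."""
    return tau * math.log(v_dec / dv0)

for dv0 in (1e-3, 1e-4, 1e-5):        # each decade of input costs ~23 ps more
    print(f"dv0 = {dv0:.0e} V -> t = {decision_time(dv0) * 1e12:.0f} ps")
```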
A Hybrid 1st/2nd-Order VCO-Based CTDSM With Rail-to-Rail Artifact Tolerance for Bidirectional Neural Interface Bidirectional brain-machine interfaces enable simultaneous brain activity monitoring and neural modulation. However, the stimulation artifact can saturate the instrumentation front-end while concurrent on-site recording is needed. This brief presents a voltage-controlled-oscillator (VCO) based continuous-time ΔΣ modulator (CTDSM) with rail-to-rail input range and fast artifact tracking. A hybrid 1st/2nd-order loop is designed to achieve high dynamic range (DR) and a large input range. The stimulation artifact is detected by a phase counter and compensated by the 1st-order loop; the residue signal is digitized by the 2nd-order loop for high precision. Redundancy between the two loops is implemented as feedback capacitor elements with a non-binary ratio to guarantee feedback stability and linearity. Fabricated in a 55-nm CMOS process, the prototype achieves 65.7 dB SNDR across a 10 kHz bandwidth with a full scale of 200 mVpp, and a ±1.2 V input range is achieved to suppress artifacts. A saline-based experiment with simultaneous stimulation and recording demonstrates that the implemented system can track and tolerate rail-to-rail stimulation artifacts within 30 µs while small neural signals are continuously monitored.
1.2
0.2
0.2
0.2
0.2
0.1
0
0
0
0
0
0
0
0
A DVB-H receiver and gateway implementation on a FPGA- and DSP-based platform. DVB-H is a mobile TV broadcasting system that is being deployed in many countries around the world. Although specific terminals do exist, it is not very common for mobile terminals (e.g. smartphones or tablets) to be DVB-H enabled. Therefore, it might be desirable to have some kind of device that could retransmit the DVB-H services using a more established system such as WiFi. A prototype system, ...
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use dominance frontiers, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
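Dominance frontiers are simple to compute once immediate dominators are known. The sketch below uses the later Cooper-Harvey-Kennedy formulation rather than the paper's original algorithm, on a hypothetical diamond-shaped CFG: for each join point, walk each predecessor's dominator-tree path up to the join's immediate dominator, adding the join to every node passed.

```python
# Hypothetical diamond CFG: A branches to B and C, which merge at D.
preds = {'A': [], 'B': ['A'], 'C': ['A'], 'D': ['B', 'C']}
idom  = {'A': None, 'B': 'A', 'C': 'A', 'D': 'A'}   # immediate dominators

df = {n: set() for n in preds}
for n, ps in preds.items():
    if len(ps) >= 2:                  # only join points contribute
        for p in ps:
            runner = p
            while runner != idom[n]:  # climb the dominator tree
                df[runner].add(n)
                runner = idom[runner]

print(df)   # {'A': set(), 'B': {'D'}, 'C': {'D'}, 'D': set()}
```

DF(B) = DF(C) = {D} because B and C each dominate a predecessor of D without strictly dominating D itself; these are exactly the phi-placement sites for SSA construction.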
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ◊W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ◊W. Thus, ◊W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
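The single operation Chord exports, key → node, can be sketched with a static ring and per-node finger tables: each hop forwards to the closest preceding finger, giving the logarithmic lookup the abstract claims. The ring size and node IDs below are illustrative, and join/leave maintenance is omitted:

```python
# Minimal Chord-style successor lookup on an m-bit identifier circle.
M = 6
nodes = sorted([1, 8, 14, 21, 32, 38, 42, 48, 51, 56])

def successor(k):
    """First node clockwise from identifier k on the ring."""
    k %= 2 ** M
    for n in nodes:
        if n >= k:
            return n
    return nodes[0]                     # wrap around the ring

def between(x, a, b):
    """x in the half-open circular interval (a, b]."""
    return (a < x <= b) if a < b else (x > a or x <= b)

finger = {n: [successor(n + 2 ** i) for i in range(M)] for n in nodes}

def lookup(start, key, hops=0):
    # Done when the key lies between us and our immediate successor.
    if between(key, start, finger[start][0]):
        return finger[start][0], hops
    # Otherwise forward to the closest preceding finger: O(log N) hops.
    for f in reversed(finger[start]):
        if between(f, start, key) and f != key:
            return lookup(f, key, hops + 1)
    return lookup(finger[start][0], key, hops + 1)

print(lookup(1, 54))   # -> (56, 4): node 56 stores key 54, found in 4 hops
```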
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area² product (EDA²P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA²P and EDAP.
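The composite metrics are straightforward to compute once a tool like McPAT supplies energy, delay, and area. A toy comparison with made-up numbers (not McPAT output) showing how weighting area once or twice can flip the winner, much as the abstract reports for 8-core versus 4-core clusters:

```python
# EDP ignores area; EDAP weights it once; EDA2P weights it twice.
designs = {
    "8-core clusters": {"energy_J": 12.0, "delay_s": 0.80, "area_mm2": 110.0},
    "4-core clusters": {"energy_J": 12.5, "delay_s": 0.85, "area_mm2": 90.0},
}
for name, d in designs.items():
    edp   = d["energy_J"] * d["delay_s"]
    edap  = edp * d["area_mm2"]
    eda2p = edp * d["area_mm2"] ** 2
    print(f"{name}: EDP={edp:.2f}  EDAP={edap:.1f}  EDA2P={eda2p:.0f}")
# 8-core wins on EDP, but the smaller 4-core design wins once area counts.
```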
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
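As a concrete instance of the method, here is a minimal ADMM solver for the lasso, one of the applications the review discusses: the x-update is a ridge solve, the z-update is soft-thresholding, and u is the scaled dual variable. Problem sizes, penalties, and iteration count are illustrative assumptions:

```python
import numpy as np

def lasso_admm(A, b, lam=0.1, rho=1.0, iters=200):
    """ADMM for min 0.5*||Ax - b||^2 + lam*||z||_1  s.t.  x = z."""
    n = A.shape[1]
    x = z = u = np.zeros(n)
    Atb = A.T @ b
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))   # factor once, reuse
    for _ in range(iters):
        # x-update: ridge regression solve via the cached Cholesky factor.
        x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))
        # z-update: soft-thresholding (proximal operator of the l1 norm).
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0.0)
        # u-update: scaled dual ascent on the consensus constraint x = z.
        u = u + x - z
    return z

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20); x_true[[2, 7]] = [1.5, -2.0]
b = A @ x_true + 0.01 * rng.standard_normal(50)
print(np.round(lasso_admm(A, b), 2))   # recovers the two nonzero entries
```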
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)-size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε²}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D+√n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(log N), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator, while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via the error amplifier) to a moderate-gain high-frequency path (via a bandpass filter) at the inputs of the PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate the two zeros required in a Type III compensator. A constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process, with the constant Gm/C biasing technique applied to one of the designs. Measurement results show that the converter output settles within 7 μs for a load current step of 500 mA. A peak efficiency of 97% is obtained at 360 mW output power, and a high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of the proposed compensator are reduced by more than 75% in both designs, compared to an equivalent conventional Type III compensator.
Conductance modulation techniques in switched-capacitor DC-DC converter for maximum-efficiency tracking and ripple mitigation in 22nm Tri-gate CMOS Active conduction modulation techniques are demonstrated in a fully integrated multi-ratio switched-capacitor voltage regulator with hysteretic control, implemented in 22nm tri-gate CMOS with high-density MIM capacitors. We present (i) an adaptive switching frequency and switch-size scaling scheme for maximum-efficiency tracking across a wide range of voltages and currents, governed by a frequency-based control law that is experimentally validated across multiple dies and temperatures, and (ii) a simple active ripple mitigation technique that modulates the gate drive of select MOSFET switches effectively in all conversion modes. Efficiency boosts of up to 15% are measured under light-load conditions. A load-independent output ripple of <50 mV is achieved, enabling fewer interleaving phases. Testchip implementations and measurements demonstrate ease of integration in SoC designs, power efficiency benefits, and EMI/RFI improvements.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized, while digital CMOS-based PIM has faster single-edge execution but potentially low parallelism. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, this hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM), and 3.05× (CMOS-based PIM) on average. For energy savings, Hetraph reduces energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM), and 10.48× (CMOS-based PIM) on average.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
PYXIS: An Open-Source Performance Dataset Of Sparse Accelerators Customized accelerators provide gains in performance and efficiency in specific application domains. Sparse data structures and representations exist in a wide range of applications. However, it is challenging to design accelerators for sparse applications because no architectural or performance-level analytic models are able to fully capture the spectrum of sparse data. Accelerator researchers rely on real execution to get precise feedback for their designs. In this work, we present PYXIS, a performance dataset for customized accelerators on sparse data. PYXIS collects accelerator designs and real execution performance statistics. Currently, there are 73.8 K instances in PYXIS. PYXIS is open-source, and we are constantly growing it with new accelerator designs and performance statistics. PYXIS can benefit researchers in the fields of accelerators, architecture, performance, algorithms, and many related topics.
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use dominance frontiers, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ◊W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ◊W. Thus, ◊W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area² product (EDA²P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA²P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)-size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε²}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D+√n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(log N), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator, while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via the error amplifier) to a moderate-gain high-frequency path (via a bandpass filter) at the inputs of the PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate the two zeros required in a Type III compensator. A constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process, with the constant Gm/C biasing technique applied to one of the designs. Measurement results show that the converter output settles within 7 μs for a load current step of 500 mA. A peak efficiency of 97% is obtained at 360 mW output power, and a high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of the proposed compensator are reduced by more than 75% in both designs, compared to an equivalent conventional Type III compensator.
Fundamental analysis of a car to car visible light communication system This paper presents a mathematical model for car-to-car (C2C) visible light communications (VLC) that aims to predict the system performance under different communication geometries. A market-weighted headlamp beam pattern model is employed. We consider both the line-of-sight (LOS) and non-line-of-sight (NLOS) links, and outline the relationship between the communication distance, the system bit error rate (BER) performance and the BER distribution on a vertical plane. Results show that by placing a photodetector (PD) at a height of 0.2-0.4 m above road surface in the car, the communications coverage range can be extended up to 20 m at a data rate of 2Mbps.
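A bare-bones LOS link-budget sketch in the spirit of the model: a generalized Lambertian emitter, a photodetector of given area, and OOK detection with BER = Q(√SNR). The simple Lambertian headlamp, the responsivity, and the noise variance below are assumptions for illustration, unlike the paper's market-weighted beam pattern:

```python
import math

def lambertian_order(half_angle_deg):
    """Generalized Lambertian order m from the half-power semi-angle."""
    return -math.log(2) / math.log(math.cos(math.radians(half_angle_deg)))

def received_power(pt_w, d_m, phi_deg, psi_deg, area_m2, half_angle_deg=30):
    """LOS optical channel: H = (m+1)A/(2*pi*d^2) * cos^m(phi) * cos(psi)."""
    m = lambertian_order(half_angle_deg)
    h = ((m + 1) * area_m2 / (2 * math.pi * d_m ** 2)
         * math.cos(math.radians(phi_deg)) ** m
         * math.cos(math.radians(psi_deg)))
    return pt_w * h

def ook_ber(snr_linear):
    """BER of on-off keying: Q(sqrt(SNR)), with Q(x) = erfc(x/sqrt(2))/2."""
    return 0.5 * math.erfc(math.sqrt(snr_linear / 2))

# 1 W source, 20 m range, aligned geometry, 1 cm^2 detector.
pr = received_power(pt_w=1.0, d_m=20, phi_deg=0, psi_deg=0, area_m2=1e-4)
snr = (0.5 * pr) ** 2 / 1e-15          # responsivity 0.5 A/W, assumed noise
print(f"Pr = {pr:.2e} W, OOK BER ~ {ook_ber(snr):.1e}")
```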
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized, while digital CMOS-based PIM has faster single-edge execution but potentially low parallelism. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, this hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM), and 3.05× (CMOS-based PIM) on average. For energy savings, Hetraph reduces energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM), and 10.48× (CMOS-based PIM) on average.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
A Survey of P2P Virtual World Infrastructure With the development of computer science and virtual reality technology, virtual worlds have evolved along the path of computer game development: from arcade games, console games, LAN games, and Internet-connected games to unstructured games, games with player-generated content, worlds with designer-provided objectives, games with social networks, and open virtual worlds [1]. The traditional client-server structure scales poorly in at least three respects: the number of players each server can host is limited, the server is a single point of failure, and computational resources are unevenly balanced. This survey investigates an alternative, peer-to-peer (P2P) virtual world software infrastructure, to address these traditional architectural issues.
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use dominance frontiers, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ◊W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ◊W. Thus, ◊W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area² product (EDA²P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA²P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)-size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε²}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D+√n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(log N), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator, while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via the error amplifier) to a moderate-gain high-frequency path (via a bandpass filter) at the inputs of the PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate the two zeros required in a Type III compensator. A constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process, with the constant Gm/C biasing technique applied to one of the designs. Measurement results show that the converter output settles within 7 μs for a load current step of 500 mA. A peak efficiency of 97% is obtained at 360 mW output power, and a high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of the proposed compensator are reduced by more than 75% in both designs, compared to an equivalent conventional Type III compensator.
Conductance modulation techniques in switched-capacitor DC-DC converter for maximum-efficiency tracking and ripple mitigation in 22nm Tri-gate CMOS Active conduction modulation techniques are demonstrated in a fully integrated multi-ratio switched-capacitor voltage regulator with hysteretic control, implemented in 22nm tri-gate CMOS with high-density MIM capacitors. We present (i) an adaptive switching frequency and switch-size scaling scheme for maximum-efficiency tracking across a wide range of voltages and currents, governed by a frequency-based control law that is experimentally validated across multiple dies and temperatures, and (ii) a simple active ripple mitigation technique that modulates the gate drive of select MOSFET switches effectively in all conversion modes. Efficiency boosts of up to 15% are measured under light-load conditions. A load-independent output ripple of <50 mV is achieved, enabling fewer interleaving phases. Testchip implementations and measurements demonstrate ease of integration in SoC designs, power efficiency benefits, and EMI/RFI improvements.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized, while digital CMOS-based PIM has faster single-edge execution but potentially low parallelism. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, this hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM), and 3.05× (CMOS-based PIM) on average. For energy savings, Hetraph reduces energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM), and 10.48× (CMOS-based PIM) on average.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Design Methodology of a Dual-Halbach Array Linear Actuator with Thermal-Electromagnetic Coupling. This paper proposes a design methodology for linear actuators that considers thermal and electromagnetic coupling with geometrical and temperature constraints, maximizing force density while minimizing force ripple. The method allows defining an actuator for given specifications in a step-by-step way so that requirements are met and the temperature within the device is kept at or below the maximum allowed for continuous operation. According to the proposed method, the electromagnetic and thermal models are built with quasi-static parametric finite element models. The methodology was successfully applied to the design of a linear cylindrical actuator with a dual quasi-Halbach array of permanent magnets and a moving coil. The actuator can produce an axial force of 120 N and a stroke of 80 mm. The paper also presents a comparative analysis between results obtained considering only an electromagnetic model and the coupled thermal-electromagnetic model. This comparison shows that the final designs for the two cases differ significantly, especially regarding active volume and electrical and magnetic loading. Although the methodology was employed here to design a specific actuator, its structure can be used to design a wide range of linear devices if the parametric models are adjusted for each particular actuator.
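A toy version of the coupled loop at the heart of the methodology: size the coil current for maximum force subject to the steady-state winding temperature staying at or below the continuous-operation limit. All parameters below are illustrative stand-ins for the paper's finite element models:

```python
# Lorentz force F = N*B*L*i against copper loss P = R*i^2 and a lumped
# thermal model T = T_amb + R_th*P. Back the current off until thermally safe.
B, N, L_turn = 0.9, 200, 0.15        # airgap flux density [T], turns, turn length [m]
R_coil, R_th = 2.5, 1.8              # coil resistance [ohm], thermal resistance [K/W]
T_amb, T_max = 40.0, 130.0           # ambient and max winding temperature [C]

def force(i):
    return N * B * L_turn * i                 # Lorentz force [N]

def temperature(i):
    return T_amb + R_th * R_coil * i ** 2     # steady-state winding temp [C]

i = 10.0
while temperature(i) > T_max:        # the thermal constraint sets the current
    i -= 0.01
print(f"I = {i:.2f} A, F = {force(i):.0f} N, T = {temperature(i):.0f} C")
# With these stand-in values the thermal limit lands near 120 N of axial force.
```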
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use dominance frontiers, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
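As an aside on how dominance frontiers can be computed: the sketch below is a minimal Python illustration using the later Cooper-Harvey-Kennedy formulation rather than the algorithm of this paper; the `preds` (CFG predecessor lists) and `idom` (immediate-dominator map) inputs are assumed to be given, and the example graph is hypothetical.

def dominance_frontiers(nodes, preds, idom):
    # DF(n): nodes where n's dominance ends; only join points (>= 2 preds)
    # can appear in any dominance frontier.
    df = {n: set() for n in nodes}
    for b in nodes:
        if len(preds[b]) >= 2:
            for p in preds[b]:
                runner = p
                # walk up the dominator tree until reaching b's immediate dominator
                while runner != idom[b]:
                    df[runner].add(b)
                    runner = idom[runner]
    return df

# Diamond CFG: entry -> a, entry -> b, a -> merge, b -> merge
preds = {"entry": [], "a": ["entry"], "b": ["entry"], "merge": ["a", "b"]}
idom = {"entry": None, "a": "entry", "b": "entry", "merge": "entry"}
print(dominance_frontiers(list(preds), preds, idom))  # "merge" lands in DF(a) and DF(b)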
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ◊W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ◊W. Thus, ◊W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
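To make the key-to-node mapping concrete, here is a toy, centralized Python sketch of a Chord-style identifier circle: keys and nodes are hashed onto the same ring and each key is assigned to its clockwise successor. This illustrates only the mapping, not the distributed finger-table lookup; the identifier width and node names are arbitrary choices for the example.

import hashlib
from bisect import bisect_left

M = 16  # identifier bits (toy value; real Chord uses a 160-bit SHA-1 space)

def ring_id(name: str) -> int:
    return int(hashlib.sha1(name.encode()).hexdigest(), 16) % (2 ** M)

def successor(sorted_node_ids, key_id):
    # First node clockwise from key_id on the identifier circle.
    i = bisect_left(sorted_node_ids, key_id)
    return sorted_node_ids[i % len(sorted_node_ids)]

nodes = sorted(ring_id(f"node{i}") for i in range(8))
key = ring_id("some-data-item")
print(f"key {key} is stored at node {successor(nodes, key)}")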
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area² product (EDA²P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA²P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
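As one concrete instance of the applications listed above, the following is a minimal numpy sketch of ADMM applied to the lasso, with the standard x-update (a ridge solve), z-update (soft-thresholding), and scaled dual update; the problem data are synthetic, and the fixed penalty rho and iteration count are illustrative choices rather than tuned values.

import numpy as np

def admm_lasso(A, b, lam, rho=1.0, iters=200):
    # Minimize 0.5*||Ax - b||^2 + lam*||x||_1 via the ADMM splitting x, z.
    n = A.shape[1]
    x = z = u = np.zeros(n)
    AtA_rhoI = A.T @ A + rho * np.eye(n)
    Atb = A.T @ b
    for _ in range(iters):
        x = np.linalg.solve(AtA_rhoI, Atb + rho * (z - u))               # x-update
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0.0)  # soft-threshold
        u = u + x - z                                                    # scaled dual update
    return z

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20); x_true[:3] = [3.0, -2.0, 1.5]
b = A @ x_true + 0.01 * rng.standard_normal(50)
print(np.round(admm_lasso(A, b, lam=1.0), 2))  # recovers the 3-sparse support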
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)-size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε²}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D+√n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(log N), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator; while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via error amplifier) with a moderate-gain high-frequency path (via bandpass filter) at the inputs of PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate two zeros required in a Type III compensator. Constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process with constant Gm/C biasing technique being applied to one of the designs. Measurement results show that the converter output settles within 7 μs for a load current step of 500 mA. Peak efficiency of 97% is obtained at 360 mW output power, and high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of the proposed compensator are reduced by > 75% in both designs, compared to an equivalent conventional Type III compensator.
Fundamental analysis of a car to car visible light communication system This paper presents a mathematical model for car-to-car (C2C) visible light communications (VLC) that aims to predict the system performance under different communication geometries. A market-weighted headlamp beam pattern model is employed. We consider both the line-of-sight (LOS) and non-line-of-sight (NLOS) links, and outline the relationship between the communication distance, the system bit error rate (BER) performance and the BER distribution on a vertical plane. Results show that by placing a photodetector (PD) at a height of 0.2-0.4 m above road surface in the car, the communications coverage range can be extended up to 20 m at a data rate of 2 Mbps.
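For intuition about how such LOS link budgets drive BER, the sketch below computes an order-of-magnitude OOK bit error rate from a generic Lambertian LOS gain; it deliberately replaces the paper's market-weighted headlamp beam pattern with an idealized source, and all parameter values (transmit power, detector area, noise level) are hypothetical.

import math

def q_function(x):
    return 0.5 * math.erfc(x / math.sqrt(2))

def los_ook_ber(distance_m, tx_power_w=1.0, pd_area_m2=1e-4,
                responsivity=0.5, noise_current_a=5e-8, lambertian_order=1):
    # Idealized Lambertian LOS channel gain at normal incidence.
    gain = (lambertian_order + 1) * pd_area_m2 / (2 * math.pi * distance_m ** 2)
    signal_current = responsivity * tx_power_w * gain
    return q_function(signal_current / noise_current_a)  # toy OOK BER model

for d in (5, 10, 20):
    print(f"{d} m -> BER ~ {los_ook_ber(d):.2e}")  # BER rises steeply with distance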
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Analysis of Distributed Random Grouping for Aggregate Computation on Wireless Sensor Networks with Randomly Changing Graphs Dynamical connection graph changes are inherent in networks such as peer-to-peer networks, wireless ad hoc networks, and wireless sensor networks. Considering the influence of the frequent graph changes is thus essential for precisely assessing the performance of applications and algorithms on such networks. In this paper, using stochastic hybrid systems (SHSs), we model the dynamics and analyze the performance of an epidemic-like algorithm, distributed random grouping (DRG), for average aggregate computation on a wireless sensor network with dynamical graph changes. Particularly, we derive the convergence criteria and the upper bounds on the running time of the DRG algorithm for a set of graphs that are individually disconnected but jointly connected in time. An effective technique for the computation of a key parameter in the derived bounds is also developed. Numerical results and an application extended from our analytical results to control the graph sequences are presented to exemplify our analysis.
A survey on routing protocols for wireless sensor networks Recent advances in wireless sensor networks have led to many new protocols specifically designed for sensor networks where energy awareness is an essential consideration. Most of the attention, however, has been given to the routing protocols since they might differ depending on the application and network architecture. This paper surveys recent routing protocols for sensor networks and presents a classification for the various approaches pursued. The three main categories explored in this paper are data-centric, hierarchical and location-based. Each routing protocol is described and discussed under the appropriate category. Moreover, protocols using contemporary methodologies such as network flow and quality of service modeling are also discussed. The paper concludes with open research issues.
Robust Aggregation in Sensor Networks In the emerging area of sensor-based systems, a significant challenge is to develop scalable, fault-tolerant methods to extract useful information from the data the sensors collect. An approach to this data management problem is the use of sensor "database" systems, which allow users to perform aggregation queries on the readings of a sensor network. Due to power and range constraints, centralized approaches are generally impractical, so most systems use in-network aggregation to reduce network traffic. However, these aggregation strategies become bandwidth-intensive when combined with the fault-tolerant, multi-path routing methods often used in these environments. In order to avoid this expense, we investigate the use of approximate in-network aggregation using small sketches and we survey robust and scalable methods for computing duplicate-sensitive aggregates.
Directed diffusion for wireless sensor networking Advances in processor, memory, and radio technology will enable small and cheap nodes capable of sensing, communication, and computation. Networks of such nodes can coordinate to perform distributed sensing of environmental phenomena. In this paper, we explore the directed-diffusion paradigm for such coordination. Directed diffusion is data-centric in that all communication is for named data. All nodes in a directed-diffusion-based network are application aware. This enables diffusion to achieve energy savings by selecting empirically good paths and by caching and processing data in-network (e.g., data aggregation). We explore and evaluate the use of directed diffusion for a simple remote-surveillance sensor network analytically and experimentally. Our evaluation indicates that directed diffusion can achieve significant energy savings and can outperform idealized traditional schemes (e.g., omniscient multicast) under the investigated scenarios.
Initializing newly deployed ad hoc and sensor networks A newly deployed multi-hop radio network is unstructured and lacks a reliable and efficient communication scheme. In this paper, we take a step towards analyzing the problems existing during the initialization phase of ad hoc and sensor networks. Particularly, we model the network as a multi-hop quasi unit disk graph and allow nodes to wake up asynchronously at any time. Further, nodes do not feature a reliable collision detection mechanism, and they have only limited knowledge about the network topology. We show that even for this restricted model, a good clustering can be computed efficiently. Our algorithm efficiently computes an asymptotically optimal clustering. Based on this algorithm, we describe a protocol for quickly establishing synchronized sleep and listen schedules among nodes within a cluster. Additionally, we provide simulation results in a variety of settings.
Geographic Gossip: Efficient Averaging for Sensor Networks Gossip algorithms for distributed computation are attractive due to their simplicity, distributed nature, and robustness in noisy and uncertain environments. However, using standard gossip algorithms can lead to a significant waste of energy by repeatedly recirculating redundant information. For realistic sensor network model topologies like grids and random geometric graphs, the inefficiency of gossip schemes is related to the slow mixing times of random walks on the communication graph. We propose and analyze an alternative gossiping scheme that exploits geographic information. By utilizing geographic routing combined with a simple resampling method, we demonstrate substantial gains over previously proposed gossip protocols. For regular graphs such as the ring or grid, our algorithm improves standard gossip by factors of n and √n, respectively. For the more challenging case of random geometric graphs, our algorithm computes the true average to accuracy ε using O(n^1.5 √(log n) log ε⁻¹) radio transmissions, which yields a √(n/log n) factor improvement over standard gossip algorithms. We illustrate these theoretical results with experimental results.
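For contrast with the geographic scheme, here is a minimal Python simulation of the standard nearest-neighbor gossip baseline that the paper improves upon: at each step a random node averages its value with a random neighbor. The ring topology and round count are arbitrary demo choices, and the geographic-routing resampling step itself is not reproduced here.

import random

def pairwise_gossip(values, neighbors, rounds=2000, seed=1):
    # Standard gossip: repeated randomized pairwise averaging preserves the
    # sum, so all estimates converge to the global average.
    rng = random.Random(seed)
    vals = list(values)
    nodes = list(neighbors)
    for _ in range(rounds):
        i = rng.choice(nodes)
        if neighbors[i]:
            j = rng.choice(neighbors[i])
            vals[i] = vals[j] = (vals[i] + vals[j]) / 2
    return vals

# Ring of 10 nodes holding values 0..9; every estimate approaches 4.5.
nbrs = {i: [(i - 1) % 10, (i + 1) % 10] for i in range(10)}
print([round(v, 2) for v in pairwise_gossip(range(10), nbrs)])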
The price of validity in dynamic networks Massive-scale self-administered networks like Peer-to-Peer and Sensor Networks have data distributed across thousands of participant hosts. These networks are highly dynamic with short-lived hosts being the norm rather than an exception. In recent years, researchers have investigated best-effort algorithms to efficiently process aggregate queries (e.g., sum, count, average, minimum and maximum) [6, 13, 21, 34, 35, 37] on these networks. Unfortunately, query semantics for best-effort algorithms are ill-defined, making it hard to reason about guarantees associated with the result returned. In this paper, we specify a correctness condition, single-site validity, with respect to which the above algorithms are best-effort. We present a class of algorithms that guarantee validity in dynamic networks. Experiments on real-life and synthetic network topologies validate performance of our algorithms, revealing the hitherto unknown price of validity.
Information Spreading in Stationary Markovian Evolving Graphs Markovian evolving graphs are dynamic-graph models where the links among a fixed set of nodes change during time according to an arbitrary Markovian rule. They are extremely general and they can well describe important dynamic-network scenarios. We study the speed of information spreading in the stationary phase by analyzing the completion time of the flooding mechanism. We prove a general theorem that establishes an upper bound on flooding time in any stationary Markovian evolving graph in terms of its node-expansion properties. We apply our theorem in two natural and relevant cases of such dynamic graphs. Geometric Markovian evolving graphs where the Markovian behaviour is yielded by n mobile radio stations, with fixed transmission radius, that perform independent random walks over a square region of the plane. Edge-Markovian evolving graphs where the probability of existence of any edge at time t depends on the existence (or not) of the same edge at time t-1. In both cases, the obtained upper bounds hold with high probability and they are nearly tight. In fact, they turn out to be tight for a large range of the values of the input parameters. As for geometric Markovian evolving graphs, our result represents the first analytical upper bound for flooding time on a class of concrete mobile networks.
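A quick way to get a feel for flooding on edge-Markovian evolving graphs is direct simulation. The sketch below implements the edge dynamics described above (each absent edge is born with probability p, each existing edge dies with probability q, independently per step) and counts how many synchronous rounds flooding from one source needs; n, p, and q are arbitrary demo values.

import random

def flooding_time(n=30, p=0.05, q=0.3, seed=2):
    rng = random.Random(seed)
    edges = set()
    informed = {0}
    t = 0
    while len(informed) < n:
        t += 1
        # Edge-Markovian update of every potential edge.
        for i in range(n):
            for j in range(i + 1, n):
                e = (i, j)
                if e in edges:
                    if rng.random() < q:
                        edges.discard(e)
                elif rng.random() < p:
                    edges.add(e)
        # One synchronous flooding round over the current snapshot.
        informed |= {j for (i, j) in edges if i in informed}
        informed |= {i for (i, j) in edges if j in informed}
    return t

print(flooding_time(), "rounds to inform all nodes")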
Achievable rates in cognitive radio channels Cognitive radio promises a low-cost, highly flexible alternative to the classic single-frequency band, single-protocol wireless device. By sensing and adapting to its environment, such a device is able to fill voids in the wireless spectrum and can dramatically increase spectral efficiency. In this paper, the cognitive radio channel is defined as a two-sender, two-receiver interference channel in which sender 2 obtains the encoded message sender 1 plans to transmit. We consider two cases: in the genie-aided cognitive radio channel, sender 2 is noncausally presented the data to be transmitted by sender 1 while in the causal cognitive radio channel, the data is obtained causally. The cognitive radio at sender 2 may then choose to transmit simultaneously over the same channel, as opposed to waiting for an idle channel as is traditional for a cognitive radio. Our main result is the development of an achievable region which combines Gel'fand-Pinsker coding with an achievable region construction for the interference channel. In the additive Gaussian noise case, this resembles dirty-paper coding, a technique used in the computation of the capacity of the Gaussian multiple-input multiple-output (MIMO) broadcast channel. Numerical evaluation of the region in the Gaussian noise case is performed, and compared to an inner bound, the interference channel, and an outer bound, a modified Gaussian MIMO broadcast channel. Results are also extended to the case in which the message is causally obtained.
A study of phase noise in CMOS oscillators This paper presents a study of phase noise in two inductorless CMOS oscillators. First-order analysis of a linear oscillatory system leads to a noise shaping function and a new definition of Q. A linear model of CMOS ring oscillators is used to calculate their phase noise, and three phase noise phenomena, namely, additive noise, high-frequency multiplicative noise, and low-frequency multiplicative noise, are identified and formulated. Based on the same concepts, a CMOS relaxation oscillator is also analyzed. Issues and techniques related to simulation of noise in the time domain are described, and two prototypes fabricated in a 0.5-μm CMOS technology are used to investigate the accuracy of the theoretical predictions. Compared with the measured results, the calculated phase noise values of a 2-GHz ring oscillator and a 900-MHz relaxation oscillator at 5 MHz offset have an error of approximately 4 dB. VOLTAGE-CONTROLLED oscillators (VCOs) are an integral part of phase-locked loops, clock recovery circuits, and frequency synthesizers. Random fluctuations in the output frequency of VCOs, expressed in terms of jitter and phase noise, have a direct impact on the timing accuracy where phase alignment is required and on the signal-to-noise ratio where frequency translation is performed. In particular, RF oscillators employed in wireless transceivers must meet stringent phase noise requirements, typically mandating the use of passive LC tanks with a high quality factor Q. However, the trend toward large-scale integration and low cost makes it desirable to implement oscillators monolithically. The paucity of literature on noise in such oscillators together with a lack of experimental verification of underlying theories has motivated this work. This paper provides a study of phase noise in two inductorless CMOS VCOs. Following a first-order analysis of a linear oscillatory system and introducing a new definition of Q, we employ a linearized model of ring oscillators to obtain an estimate of their noise behavior. We also describe the limitations of the model, identify three mechanisms leading to phase noise, and use the same concepts to analyze a CMOS relaxation oscillator. In contrast to previous studies where time-domain jitter has been investigated (1), (2), our analysis is performed in the frequency domain to directly determine the phase noise. Experimental results obtained from a 2-GHz ring oscillator and a 900-MHz relaxation oscillator indicate that, despite many simplifying approximations, lack of accurate MOS models for RF operation, and the use of simple noise
An architecture for survivable coordination in large distributed systems Coordination among processes in a distributed system can be rendered very complex in a large-scale system where messages may be delayed or lost and when processes may participate only transiently or behave arbitrarily, e.g., after suffering a security breach. In this paper, we propose a scalable architecture to support coordination in such extreme conditions. Our architecture consists of a collection of persistent data servers that implement simple shared data abstractions for clients, without trusting the clients or even the servers themselves. We show that, by interacting with these untrusted servers, clients can solve distributed consensus, a powerful and fundamental coordination primitive. Our architecture is very practical and we describe the implementation of its main components in a system called Fleet.
Cross-layer sensors for green cognitive radio. Green cognitive radio is a cognitive radio (CR) that is aware of sustainable development issues and handles them as an additional constraint on the decision-making function of the cognitive cycle. In this paper, we explain how sensors distributed throughout the different layers of our CR model can help make the best decision so as to contribute most effectively to sustainable development.
Understanding contention-based channels and using them for defense Microarchitectural resources such as caches and predictors can be used to leak information across security domains. Significant prior work has demonstrated attacks and defenses for specific types of such microarchitectural side and covert channels. In this paper, we introduce a general mathematical study of microarchitectural channels using information theory. Our conceptual contribution is a simple mathematical abstraction that captures the common characteristics of all microarchitectural channels. We call this the Bucket model and it reveals that microarchitectural channels are fundamentally different from side and covert channels in networking. We then quantify the communication capacity of several microarchitectural covert channels (including channels that rely on performance counters, AES hardware and memory buses) and measure bandwidths across both KVM based heavy-weight virtualization and light-weight operating-system level isolation. We demonstrate channel capacities that are orders of magnitude higher compared to what was previously considered possible. Finally, we introduce a novel way of detecting intelligent adversaries that try to hide while running covert channel eavesdropping attacks. Our method generalizes a prior detection scheme (that modeled static adversaries) by introducing noise that hides the detection process from an intelligent eavesdropper.
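As a generic illustration of how such channel capacities are quantified (this is textbook information theory, not the paper's Bucket model), one can treat a noisy covert channel as a binary symmetric channel and bound its throughput by C = 1 - H2(p) bits per symbol; the symbol rate and error probability below are made-up numbers.

import math

def bsc_capacity(error_prob):
    # Shannon capacity of a binary symmetric channel: C = 1 - H2(p).
    p = error_prob
    if p in (0.0, 1.0):
        return 1.0
    h2 = -p * math.log2(p) - (1 - p) * math.log2(1 - p)
    return 1.0 - h2

symbols_per_sec = 2000   # hypothetical contention-probe rate
p_err = 0.05             # hypothetical measured bit-error probability
print(f"~{bsc_capacity(p_err) * symbols_per_sec:.0f} bits/s upper bound")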
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
1.126421
0.14369
0.126421
0.108361
0.074238
0.052446
0.003497
0.000297
0
0
0
0
0
0
Efficient dithering in MASH sigma-delta modulators for fractional frequency synthesizers The digital multistage-noise-shaping (MASH) ΣΔ modulators used in fractional frequency synthesizers are prone to spur tone generation in their output spectrum. In this paper, the state of the art on spur-tone-magnitude reduction is used to demonstrate that an M-bit MASH architecture dithered by a simple M-bit linear feedback shift register (LFSR) can be as effective as more sophisticated topologies if the dither signal is properly added. A comparison between the existent digital ΣΔ modulators used in fractional synthesizers is presented to demonstrate that the MASH architecture has the best tradeoff between complexity and quantization noise shaping, but they present spur tones. The objective of this paper was to significantly decrease the area of the circuit used to reduce the spur tone magnitude for these MASH topologies. The analysis is validated with a theoretical study of the paths where the dither signal can be added. Experimental results of a digital M-bit MASH 1-1-1 ΣΔ modulator with the proposed way to add the LFSR dither are presented to make a hardware comparison.
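To make the architecture concrete, below is a small behavioral Python model of a MASH 1-1-1 modulator with a single-bit LFSR dither added at the input LSB, one of the dither injection points the paper discusses; the accumulator width, LFSR polynomial, and seed are illustrative choices, not values from the paper.

def lfsr_bits(state=0xACE1):
    # 16-bit maximal-length Fibonacci LFSR (x^16 + x^14 + x^13 + x^11 + 1).
    while True:
        bit = (state ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1
        state = (state >> 1) | (bit << 15)
        yield bit

def mash111(x, nbits=16, nsamples=16):
    # Three cascaded first-order accumulators; carries are combined with
    # y = c1 + (1 - z^-1) c2 + (1 - z^-1)^2 c3 for third-order noise shaping.
    mod = 1 << nbits
    a1 = a2 = a3 = 0
    c2_prev = c3_prev = c3_prev2 = 0
    dither = lfsr_bits()
    out = []
    for _ in range(nsamples):
        a1 += x + next(dither)
        c1, a1 = a1 // mod, a1 % mod
        a2 += a1
        c2, a2 = a2 // mod, a2 % mod
        a3 += a2
        c3, a3 = a3 // mod, a3 % mod
        # Error-cancellation network output.
        out.append(c1 + (c2 - c2_prev) + (c3 - 2 * c3_prev + c3_prev2))
        c2_prev, c3_prev, c3_prev2 = c2, c3, c3_prev
    return out

# The output mean approximates x / 2**nbits (here 0.25).
print(mash111(x=0x4000))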
Spurious tones in digital delta sigma modulators with pseudorandom dither Pseudorandom dither generators are widely used to break up periodic cycles in digital delta sigma modulators in order to minimize spurious tones produced by underlying periodic behavior. Unfortunately, pseudorandom dither signals are themselves periodic and therefore can have limited effectiveness. This paper identifies some limitations of using pseudorandom dither signals that are inherently periodic.
Prediction of the Spectrum of a Digital Delta–Sigma Modulator Followed by a Polynomial Nonlinearity This paper presents a mathematical analysis of the power spectral density of the output of a nonlinear block driven by a digital delta-sigma modulator. The nonlinearity is a memoryless third-order polynomial with real coefficients. The analysis yields expressions that predict the noise floor caused by the nonlinearity when the input is constant.
Understanding Phase Error and Jitter: Definitions, Implications, Simulations, and Measurement. Precision oscillators are ubiquitous in modern electronic systems, and their accuracy often limits the performance of such systems. Hence, a deep understanding of how oscillator performance is quantified, simulated, and measured, and how it affects the system performance is essential for designers. Unfortunately, the necessary information is spread thinly across the published literature and textbo...
Spurious tones in digital delta-sigma modulators resulting from pseudorandom dither Digital delta-sigma modulators (DDSMs) are finite state machines; their spectra are characterized by strong periodic tones (so-called spurs) when they cycle repeatedly in time through a small number of states. This happens when the input is constant or periodic. Pseudorandom dither generators are widely used to break up periodic cycles in DDSMs in order to eliminate spurs produced by underlying periodic behavior. Unfortunately, pseudorandom dither signals are themselves periodic and therefore can have limited effectiveness. This paper addresses the fundamental limitations of using pseudorandom dither signals that are inherently periodic. We clarify some common misunderstandings in the DDSM literature. We present rigorous mathematical analysis, case studies to illustrate the issues, and insights which can prove useful in design.
Prediction of Phase Noise and Spurs in a Nonlinear Fractional-N Frequency Synthesizer Integer boundary spurs appear in the passband of the loop response of fractional-N phase lock loops and are, therefore, a potentially significant component of the phase noise. In spite of measures guaranteeing spur-free modulator outputs, the interaction of the modulation noise from a divider controller with inevitable loop nonlinearities produces such spurs. This paper presents analytical predictions of the locations and amplitudes of the spurs and accompanying noise floor levels produced by interaction between a divider controller output and a PLL loop with a static nonlinearity. A key finding is that the spur locations and amplitudes can be estimated by using only the knowledge of the structure and pdf of the accumulated modulator noise and the nonlinearity. These predictions also offer new insights into why the spurs appear.
A Digital Requantizer With Shaped Requantization Noise That Remains Well Behaved After Nonlinear Distortion A major problem in oversampling digital-to-analog converters and fractional-N frequency synthesizers, which are ubiquitous in modern communication systems, is that the noise they introduce contains spurious tones. The spurious tones are the result of digitally generated, quantized signals passing through nonlinear analog components. This paper presents a new method of digital requantization called successive requantization, special cases of which avoids the spurious tone generation problem. Sufficient conditions are derived that ensure certain statistical properties of the quantization noise, including the absence of spurious tones after nonlinear distortion. A practical example is presented and shown to satisfy these conditions.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
Cellular Logic-in-Memory Arrays As a direct consequence of large-scale integration, many advantages in the design, fabrication, testing, and use of digital circuitry can be achieved if the circuits can be arranged in a two-dimensional iterative, or cellular, array of identical elementary networks, or cells. When a small amount of storage is included in each cell, the same array may be regarded either as a logically enhanced memory array, or as a logic array whose elementary gates and connections can be "programmed" to realize a desired logical behavior.
Formal verification in hardware design: a survey In recent years, formal methods have emerged as an alternative approach to ensuring the quality and correctness of hardware designs, overcoming some of the limitations of traditional validation techniques such as simulation and testing. There are two main aspects to the application of formal methods in a design process: the formal framework used to specify desired properties of a design and the verification techniques and tools used to reason about the relationship between a specification and a corresponding implementation. We survey a variety of frameworks and techniques proposed in the literature and applied to actual designs. The specification frameworks we describe include temporal logics, predicate logic, abstraction and refinement, as well as containment between ω-regular languages. The verification techniques presented include model checking, automata-theoretic techniques, automated theorem proving, and approaches that integrate the above methods. In order to provide insight into the scope and limitations of currently available techniques, we present a selection of case studies where formal methods were applied to industrial-scale designs, such as microprocessors, floating-point hardware, protocols, memory subsystems, and communications hardware.
Constrained Consensus and Optimization in Multi-Agent Networks We present distributed algorithms that can be used by multiple agents to align their estimates with a particular value over a network with time-varying connectivity. Our framework is general in that this value can represent a consensus value among multiple agents or an optimal solution of an optimization problem, where the global objective function is a combination of local agent objective functions. Our main focus is on constrained problems where the estimate of each agent is restricted to lie in a different constraint set. To highlight the effects of constraints, we first consider a constrained consensus problem and present a distributed "projected consensus algorithm" in which agents combine their local averaging operation with projection on their individual constraint sets. This algorithm can be viewed as a version of an alternating projection method with weights that are varying over time and across agents. We establish convergence and convergence rate results for the projected consensus algorithm. We next study a constrained optimization problem for optimizing the sum of local objective functions of the agents subject to the intersection of their local constraint sets. We present a distributed "projected subgradient algorithm" which involves each agent performing a local averaging operation, taking a subgradient step to minimize its own objective function, and projecting on its constraint set. We show that, with an appropriately selected stepsize rule, the agent estimates generated by this algorithm converge to the same optimal solution for the cases when the weights are constant and equal, and when the weights are time-varying but all agents have the same constraint set.
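A minimal numerical sketch of the projected consensus idea described above: each agent first averages its neighbors' estimates with doubly stochastic weights and then projects onto its own constraint set, here one-dimensional intervals so that the projection is just clipping. The weight matrix and intervals are made-up demo data.

import numpy as np

def projected_consensus(x0, W, intervals, iters=200):
    # x_i(k+1) = Proj_{X_i}( sum_j W_ij x_j(k) ), with X_i = [lo_i, hi_i].
    x = np.array(x0, dtype=float)
    lo = np.array([a for a, _ in intervals])
    hi = np.array([b for _, b in intervals])
    for _ in range(iters):
        x = np.clip(W @ x, lo, hi)   # local averaging followed by projection
    return x

W = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])
# Constraint sets [2,5], [3,8], [1,4] intersect in [3,4]; all agents
# converge to a common point of that intersection.
print(projected_consensus([0.0, 10.0, 2.0], W, [(2, 5), (3, 8), (1, 4)]))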
A decentralized modular control framework for robust control of FES-activated walker-assisted paraplegic walking using terminal sliding mode and fuzzy logic control. A major challenge to developing functional electrical stimulation (FES) systems for paraplegic walking and widespread acceptance of these systems is the design of a robust control strategy that provides satisfactory tracking performance. The systems need to be robust against time-varying properties of neuromusculoskeletal dynamics, day-to-day variations, subject-to-subject variations, external dis...
A 15.5 dB, wide signal swing, dynamic amplifier using a common-mode voltage detection technique This paper presents a high-speed, low-power and wide signal swing differential dynamic amplifier using a common-mode voltage detection technique. The proposed dynamic amplifier achieves a 15.5 dB gain with less than 1 dB drop over a signal swing of 1.3 Vpp at an operating frequency of 1.5 GHz with a VDD of 1.2 V in 90 nm CMOS. The power consumption of the proposed circuit can be reduced linearly with operating frequency lowering.
A Bidirectional Neural Interface IC With Chopper Stabilized BioADC Array and Charge Balanced Stimulator. We present a bidirectional neural interface with a 4-channel biopotential analog-to-digital converter (bioADC) and a 4-channel current-mode stimulator in 180 nm CMOS. The bioADC directly transduces microvolt biopotentials into a digital representation without a voltage-amplification stage. Each bioADC channel comprises a continuous-time first-order ΔΣ modulator with a chopper-stabilized OTA input ...
1.069111
0.070333
0.066667
0.066667
0.036667
0.022222
0.009971
0
0
0
0
0
0
0
An efficient leader election protocol for mobile networks In this paper, we present a leader election protocol that works under frequent network changes and node mobility. Our proposed protocol, which operates well in ad hoc networks, is based on electing a unique node that outperforms all the other nodes in a cluster identified by our protocol. We discuss our protocol and present an illustrative example to show how our proposed scheme works in a mobile network. We also show how our algorithm succeeds in electing a unique leader in a mobile ad hoc network environment.
Self-stabilizing leader election in dynamic networks Three silent self-stabilizing asynchronous distributed algorithms are given for the leader election problem in a dynamic network with unique IDs, using the composite model of computation. A leader is elected for each connected component of the network. A BFS tree is also constructed in each component, rooted at the leader. This election takes O(Diam) rounds, where Diam is the maximum diameter of any component. Links and processes can be added or deleted, and data can be corrupted. After each such topological change or data corruption, the leader and BFS tree are recomputed if necessary. All three algorithms work under the unfair daemon. The three algorithms differ in their leadership stability. The first algorithm, which is the fastest in the worst case, chooses an arbitrary process as the leader. The second algorithm chooses the process of highest priority in each component, where priority can be defined in a variety of ways. The third algorithm has the strictest leadership stability. If the configuration is legitimate, and then any number of topological faults occur at the same time but no variables are corrupted, the third algorithm will converge to a new legitimate state in such a manner that no process changes its choice of leader more than once, and each component will elect a process which was a leader before the fault, provided there is at least one former leader in that component.
Regional consecutive leader election in mobile ad-hoc networks In this paper we introduce the regional consecutive leader election (RCLE) problem, which extends the classic leader election problem to the continuously-changing environment of mobile ad-hoc networks. We assume that mobile nodes, including the currently elected leader, can fail by crashing, and might enter or exit the region of interest at any time. We require the existence of certain paths that ensures a bound on the time for propagation of information within the region. We present and prove correct an algorithm that solves RCLE for a fixed region in 2 or 3-dimensional space. Our algorithm does not rely on the knowledge of the total number of nodes in the system nor on a common startup time. In the second part of the paper, we introduce a condition on mobility that is sufficient to ensure the existence of the paths required by our RCLE algorithm.
An asynchronous leader election algorithm for dynamic networks An algorithm for electing a leader in an asynchronous network with dynamically changing communication topology is presented. The algorithm ensures that, no matter what pattern of topology changes occur, if topology changes cease, then eventually every connected component contains a unique leader. The algorithm combines ideas from the Temporally Ordered Routing Algorithm (TORA) for mobile ad hoc networks [16] with a wave algorithm [21], all within the framework of a height-based mechanism for reversing the logical direction of communication links [6]. It is proved that in certain well-behaved situations, a new leader is not elected unnecessarily.
Fast byzantine agreement in dynamic networks We study Byzantine agreement in dynamic networks where topology can change from round to round and nodes can also experience heavy churn (i.e., nodes can join and leave the network continuously over time). Our main contributions are randomized distributed algorithms that achieve almost-everywhere Byzantine agreement with high probability even under a large number of adaptively chosen Byzantine nodes and continuous adversarial churn in a number of rounds that is polylogarithmic in n (where n is the stable network size). We show that our algorithms are essentially optimal (up to polylogarithmic factors) with respect to the amount of Byzantine nodes and churn rate that they can tolerate by showing a lower bound. In particular, we present the following results: 1. An O(log³ n) round randomized algorithm to achieve almost-everywhere Byzantine agreement with high probability under a presence of up to O(√n/polylog(n)) Byzantine nodes and up to a churn of O(√n/polylog(n)) nodes per round. We assume that the Byzantine nodes have knowledge about the entire state of network at every round (including random choices made by all the nodes) and can behave arbitrarily. We also assume that an adversary controls the churn - it has complete knowledge and control of what nodes join and leave and at what time and has unlimited computational power (but is oblivious to the topology changes from round to round). Our algorithm requires only polylogarithmic in n bits to be processed and sent (per round) by each node. 2. We also present an O(log³ n) round randomized algorithm that has same guarantees as the above algorithm, but works even when the connectivity of the network is controlled by an adaptive adversary (that can choose the topology based on the current states of the nodes). However, this algorithm requires up to polynomial in n bits to be processed and sent (per round) by each node. 3. We show that the above bounds are essentially the best possible, if one wants fast (i.e., polylogarithmic run time) algorithms, by showing that any (randomized) algorithm to achieve agreement in a dynamic network controlled by an adversary that can churn up to Θ(√n log n) nodes per round should take at least a polynomial number of rounds. Our algorithms are the first-known, fully distributed, Byzantine agreement algorithms in highly dynamic networks. We view our results as a step towards understanding the possibilities and limitations of highly dynamic networks that are subject to malicious behavior by a large number of nodes.
Design and Analysis of a Leader Election Algorithm for Mobile Ad Hoc Networks Leader election is a very important problem, not only in wired networks, but in mobile, ad hoc networks as well. Existing solutions to leader election do not handle frequent topology changes and dynamic nature of mobile networks. In this paper, we present a leader election algorithm that is highly adaptive to arbitrary (possibly concurrent) topological changes and is therefore well-suited for use in mobile ad hoc networks. The algorithm is based on finding an extremum and uses diffusing computations for this purpose. We show, using linear-time temporal logic, that the algorithm is "weakly" self-stabilizing and terminating. We also simulate the algorithm in a mobile ad hoc setting. Through our simulation study, we elaborate on several important issues that can significantly impact performance of such a protocol for mobile ad hoc networks such as choice of signaling, broadcast nature of wireless medium etc. Our simulation study shows that our algorithm is quite effective in that each node has a leader approximately 97-99% of the time in a variety of operating conditions.
Unreliable failure detectors for reliable distributed systems We introduce the concept of unreliable failure detectors and study how they can be used to solve Consensus in asynchronous systems with crash failures. We characterise unreliable failure detectors in terms of two properties—completeness and accuracy. We show that Consensus can be solved even with unreliable failure detectors that make an infinite number of mistakes, and determine which ones can be used to solve Consensus despite any number of crashes, and which ones require a majority of correct processes. We prove that Consensus and Atomic Broadcast are reducible to each other in asynchronous systems with crash failures; thus, the above results also apply to Atomic Broadcast. A companion paper shows that one of the failure detectors introduced here is the weakest failure detector for solving Consensus [Chandra et al. 1992].
Time-free and timer-based assumptions can be combined to obtain eventual leadership Leader-based protocols rest on a primitive able to provide the processes with the same unique leader. Such protocols are very common in distributed computing to solve synchronization or coordination problems. Unfortunately, providing such a primitive is far from being trivial in asynchronous distributed systems prone to process crashes. (It is even impossible in fault-prone purely asynchronous systems.) To circumvent this difficulty, several protocols have been proposed that build a leader facility on top of an asynchronous distributed system enriched with additional assumptions. The protocols proposed so far consider either additional assumptions based on synchrony or additional assumptions on the pattern of the messages that are exchanged. Considering systems with n processes and up to f process crashes, 1 ≤ f < n, this paper investigates the combination of a time-free assumption on the message pattern with a synchrony assumption on process speed and message delay. It shows that both types of assumptions can be combined to obtain a hybrid eventual leader protocol benefiting from the best of both worlds. This combined assumption considers a star communication structure involving f+1 processes. Its noteworthy feature lies in the level of combination of both types of assumption that is "as fine as possible" in the sense that each of the f channels of the star has to satisfy a property independently of the property satisfied by each of the f-1 other channels (the f channels do not have to satisfy the same assumption). More precisely, this combined assumption is the following: There is a correct process p (center of the star) and a set Q of f processes q (p ∉ Q) such that, eventually, either 1) each time it broadcasts a query, q receives a response from p among the (n-f) first responses to that query, or 2) the channel from p to q is timely. (The processes in the set Q can crash.) A surprisingly simple eventual leader protocol based on this fine-grained hybrid assumption is proposed and proved correct. An improvement is also presented.
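To show what a timer-based eventual leader (Omega) boils down to operationally, here is a toy round-based simulation: each process trusts the smallest process id from which it has received a heartbeat within a timeout window. This illustrates only the timer-based side of the combination studied in the paper, and the loss probability, timeout, and round count are arbitrary.

import random

def eventual_leader(n=5, crashed=frozenset({0}), rounds=30, timeout=3, seed=0):
    rng = random.Random(seed)
    last_heard = [[0] * n for _ in range(n)]  # last_heard[q][p]: q's view of p
    leaders = [None] * n
    for t in range(1, rounds + 1):
        for p in range(n):                    # correct processes send heartbeats
            if p in crashed:
                continue
            for q in range(n):
                if q not in crashed and rng.random() < 0.9:  # lossy links
                    last_heard[q][p] = t
        for q in range(n):
            if q in crashed:
                continue
            alive = [p for p in range(n) if t - last_heard[q][p] <= timeout]
            leaders[q] = min(alive) if alive else q
    return leaders

# With process 0 crashed, the correct processes converge on leader 1.
print(eventual_leader())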
Scratchpad memory: design alternative for cache on-chip memory in embedded systems In this paper we address the problem of on-chip memory selection for computationally intensive applications, by proposing scratchpad memory as an alternative to cache. Area and energy for different scratchpad and cache sizes are computed using the CACTI tool while performance was evaluated using the trace results of the simulator. The target processor chosen for evaluation was AT91M40400. The results clearly establish scratchpad memory as a low power alternative in most situations with an average energy reduction of 40%. Further, the average area-time reduction for the scratchpad memory was 46% of the cache memory.
Control-flow integrity principles, implementations, and applications Current software attacks often build on exploits that subvert machine-code execution. The enforcement of a basic safety property, control-flow integrity (CFI), can prevent such attacks from arbitrarily controlling program behavior. CFI enforcement is simple and its guarantees can be established formally, even with respect to powerful adversaries. Moreover, CFI enforcement is practical: It is compatible with existing software and can be done efficiently using software rewriting in commodity systems. Finally, CFI provides a useful foundation for enforcing further security policies, as we demonstrate with efficient software implementations of a protected shadow call stack and of access control for memory regions.
Cache Games -- Bringing Access-Based Cache Attacks on AES to Practice Side channel attacks on cryptographic systems exploit information gained from physical implementations rather than theoretical weaknesses of a scheme. In recent years, major achievements were made for the class of so-called access-driven cache attacks. Such attacks exploit the leakage of the memory locations accessed by a victim process. In this paper we consider the AES block cipher and present an attack which is capable of recovering the full secret key in almost real time for AES-128, requiring only a very limited number of observed encryptions. Unlike previous attacks, we do not require any information about the plaintext (such as its distribution, etc.). Moreover, for the first time, we also show how the plaintext can be recovered without having access to the ciphertext at all. It is the first working attack on AES implementations using compressed tables, for which no efficient technique to identify the beginning of AES rounds was known; that identification is the fundamental assumption underlying previous attacks. We have a fully working implementation of our attack which is able to recover AES keys after observing as little as 100 encryptions. It works against the OpenSSL 0.9.8n implementation of AES on Linux systems. Our spy process does not require any special privileges beyond those of a standard Linux user. A contribution of probably independent interest is a denial-of-service attack on the task scheduler of current Linux systems (CFS), which allows one to observe (on average) every single memory access of a victim process.
The accelerator store: A shared memory framework for accelerator-based systems This paper presents the many-accelerator architecture, a design approach combining the scalability of homogeneous multi-core architectures and system-on-chip's high performance and power-efficient hardware accelerators. In preparation for systems containing tens or hundreds of accelerators, we characterize a diverse pool of accelerators and find each contains significant amounts of SRAM memory (up to 90% of their area). We take advantage of this discovery and introduce the accelerator store, a scalable architectural component to minimize accelerator area by sharing its memories between accelerators. We evaluate the accelerator store for two applications and find significant system area reductions (30%) in exchange for small overheads (2% performance, 0%--8% energy). The paper also identifies new research directions enabled by the accelerator store and the many-accelerator architecture.
Fundamental analysis of a car to car visible light communication system This paper presents a mathematical model for car-to-car (C2C) visible light communications (VLC) that aims to predict the system performance under different communication geometries. A market-weighted headlamp beam pattern model is employed. We consider both the line-of-sight (LOS) and non-line-of-sight (NLOS) links, and outline the relationship between the communication distance, the system bit error rate (BER) performance, and the BER distribution on a vertical plane. Results show that by placing a photodetector (PD) at a height of 0.2-0.4 m above the road surface in the car, the communication coverage range can be extended up to 20 m at a data rate of 2 Mbps.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
1.039851
0.051549
0.03625
0.017107
0.014286
0.003061
0.000916
0.000011
0
0
0
0
0
0
Systematic software-based self-test for pipelined processors Software-based self-test (SBST) has recently emerged as an effective methodology for the manufacturing test of processors and other components in systems-on-chip (SoCs). By moving test-related functions from external resources to the SoC's interior, in the form of test programs that the on-chip processor executes, SBST significantly reduces the need for high-cost, big-iron testers, and enables high-quality at-speed testing and performance binning. Thus far, SBST approaches have focused almost exclusively on the functional (programmer-visible) components of the processor. In this paper, we analyze the challenges involved in testing an important component of modern processors, namely, the pipelining logic, and propose a systematic SBST methodology to address them. We first demonstrate that SBST programs that only target the functional components of the processor are not sufficient to test the pipeline logic, resulting in a significant loss of overall processor fault coverage. We further identify the testability hotspots in the pipeline logic using two fully pipelined reduced instruction set computer (RISC) processor benchmarks. Finally, we develop a systematic SBST methodology that enhances existing SBST programs so that they comprehensively test the pipeline logic. The proposed methodology is complementary to previous SBST techniques that target functional components (their results can form the input to our methodology, and thus we can reuse the test development effort behind preexisting SBST programs). We automate our methodology and incorporate it in an integrated software environment (developed using Java, XML, and ArchC) for the automatic generation of SBST routines for microprocessors. We apply the methodology to the two complex benchmark RISC processors with respect to two fault models: the stuck-at fault model and the transition delay fault model. Simulation results show that our methodology provides significant improvements for the two fault models, both for the entire processor (12% fault coverage improvement on average) and for the pipeline logic itself (19% fault coverage improvement on average), compared to a conventional SBST approach.
The ForSpec Temporal Logic: A New Temporal Property-Specification Language In this paper we describe the ForSpec Temporal Logic (FTL), the new temporal property-specification logic of ForSpec, Intel's new formal specification language. The key features of FTL are as follows: it is a linear temporal logic, based on Pnueli's LTL; it is based on a rich set of logical and arithmetical operations on bit vectors to describe state properties; it enables the user to define temporal connectives over time windows; it enables the user to define regular events, which are regular sequences of Boolean events, and then relate such events via special connectives; it enables the user to express properties about the past; and it includes constructs that enable the user to model multiple clock and reset signals, which is useful in the verification of hardware design.
Efficient techniques for automatic verification-oriented test set optimization Most Systems-on-a-Chip include a custom microprocessor core, and time and resource constraints make the design of such devices a challenging task. This paper presents a simulation-based methodology for the automatic completion and refinement of verification test sets. The approach extends µGP, an evolutionary test-program generator, with the ability to enhance existing test sets. Already-devised test programs are not merely included in the new set, but assimilated and used as a starting point for a new test-program cultivation task. Reusing existing material cuts down the time required to generate a verification test set during microprocessor design. Experimental results are reported on a small pipelined microprocessor, and show the effectiveness of the approach. Additionally, the use of the proposed methodology enabled us to experimentally analyze the relationship among the different code coverage metrics used in test program generation.
An Effective Technique for the Automatic Generation of Diagnosis-Oriented Programs for Processor Cores A large share of the microprocessor cores in use today are designed to be cheap and mass-produced. The diagnostic process, which is fundamental to improving yield, has to be as cost-effective as possible. This paper presents a novel approach to the construction of diagnosis-oriented software-based test sets for microprocessors. The methodology exploits existing manufacturing test sets designed for software-based self-test and improves them by using a new diagnosis-oriented approach. Experimental results are reported in this paper showing the feasibility, robustness, and effectiveness of the approach for diagnosing stuck-at faults on an Intel i8051 processor core.
Microprocessor design faults The complexity of modern microprocessors is such that design faults cannot be avoided. Such design faults can have serious consequences in critical applications. This paper proposes that information should be available from suppliers so that users can assess the suitability of a particular device and take remedial action, should a fault be discovered.
Secure Path Verification Many embedded systems, such as medical, sensing, automotive, and military systems, require basic security functions, often referred to as "secure communications". Nowadays, interest has been growing around defining new security-related properties expressing relationships with information flow and access control. In particular, novel research works are focused on formalizing generic security requirements as propagation properties. These kinds of properties, which we call Path properties, are used to check whether it is possible to leak secure data via unexpected paths. In this paper we compare these Path properties with formal security properties expressed in CTL logic, named Taint properties. We also compare two verification techniques used to verify Path and Taint properties on an abstraction of a Secure Embedded Architecture, discussing the advantages and drawbacks of each approach.
Threadmill: A post-silicon exerciser for multi-threaded processors Post-silicon validation poses unique challenges that bring-up tools must face, such as the lack of observability into the design, the typical instability of silicon bring-up platforms and the absence of supporting software (like an OS or debuggers). These challenges and the need to reach an optimal utilization of the expensive but very fast silicon platforms lead to unique design considerations - like the need to keep the tool simple and to perform most of its operation on platform without interaction with the environment. In this paper we describe a variety of novel techniques optimized for the unique characteristics of the silicon platform. These techniques are implemented in Threadmill - a bare-metal exerciser targeting multi-threaded processors. Threadmill was used in the verification of the POWER7 processor with encouraging results.
Ad-hoc On-Demand Distance Vector Routing This paper describes work carried out as part of the GUIDE project at Lancaster University. The overall aim of the project is to develop a context-sensitive tourist guide for visitors to the city of Lancaster. Visitors are equipped with portable GUIDE ...
Geographic Gossip: Efficient Averaging for Sensor Networks Gossip algorithms for distributed computation are attractive due to their simplicity, distributed nature, and robustness in noisy and uncertain environments. However, using standard gossip algorithms can lead to a significant waste of energy by repeatedly recirculating redundant information. For realistic sensor network model topologies like grids and random geometric graphs, the inefficiency of gossip schemes is related to the slow mixing times of random walks on the communication graph. We propose and analyze an alternative gossiping scheme that exploits geographic information. By utilizing geographic routing combined with a simple resampling method, we demonstrate substantial gains over previously proposed gossip protocols. For regular graphs such as the ring or grid, our algorithm improves standard gossip by factors of n and √n, respectively. For the more challenging case of random geometric graphs, our algorithm computes the true average to accuracy ε using O(n^1.5 √(log n) log ε^(-1)) radio transmissions, which yields a √(n/log n) factor improvement over standard gossip algorithms. We illustrate these theoretical results with experimental results.
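As a point of reference for the inefficiency the paper starts from, here is a toy simulation of standard pairwise gossip averaging on a ring; the node count, round count, and topology are arbitrary choices, not the paper's setup. Geographic gossip replaces the neighbor exchange below with an exchange routed, via geographic routing, to a far-away node.

```python
# Toy baseline: standard pairwise gossip averaging on a ring.
import random

def gossip_average(values, rounds=10000, seed=0):
    rng, n = random.Random(seed), len(values)
    v = list(values)
    for _ in range(rounds):
        i = rng.randrange(n)
        j = (i + 1) % n                  # a random ring edge (neighbor exchange)
        v[i] = v[j] = (v[i] + v[j]) / 2  # pairwise averaging step
    return v

print(gossip_average([0.0, 1.0, 2.0, 3.0], rounds=2000))  # all values near 1.5
```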
Hey, you, get off of my cloud: exploring information leakage in third-party compute clouds Third-party cloud computing represents the promise of outsourcing as applied to computation. Services, such as Microsoft's Azure and Amazon's EC2, allow users to instantiate virtual machines (VMs) on demand and thus purchase precisely the capacity they require when they require it. In turn, the use of virtualization allows third-party cloud providers to maximize the utilization of their sunk capital costs by multiplexing many customer VMs across a shared physical infrastructure. However, in this paper, we show that this approach can also introduce new vulnerabilities. Using the Amazon EC2 service as a case study, we show that it is possible to map the internal cloud infrastructure, identify where a particular target VM is likely to reside, and then instantiate new VMs until one is placed co-resident with the target. We explore how such placement can then be used to mount cross-VM side-channel attacks to extract information from a target VM on the same machine.
An artificial neural network (p,d,q) model for timeseries forecasting Artificial neural networks (ANNs) are flexible computing frameworks and universal approximators that can be applied to a wide range of time series forecasting problems with a high degree of accuracy. However, despite all the advantages cited for artificial neural networks, their performance on some real time series is not satisfactory. Improving forecasting accuracy, especially for time series, is an important yet often difficult task facing forecasters. Both theoretical and empirical findings have indicated that integration of different models can be an effective way of improving their predictive performance, especially when the models in the ensemble are quite different. In this paper, a novel hybrid model of artificial neural networks is proposed using auto-regressive integrated moving average (ARIMA) models in order to yield a more accurate forecasting model than artificial neural networks alone. The empirical results with three well-known real data sets indicate that the proposed model can be an effective way to improve the forecasting accuracy achieved by artificial neural networks. Therefore, it can be used as an appropriate alternative model for forecasting tasks, especially when higher forecasting accuracy is needed.
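The hybrid idea can be prototyped in a few lines. The sketch below is one plausible reading of such ARIMA+ANN hybrids, not the authors' exact formulation: an ARIMA model captures the linear structure, a small neural network is trained on the residuals from lagged inputs, and the two parts are added back together. The order, lag count, and network size are arbitrary choices.

```python
# A minimal ARIMA + ANN hybrid sketch (illustrative, not the paper's model).
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from sklearn.neural_network import MLPRegressor

def hybrid_fit(y, order=(1, 1, 1), lags=4):
    y = np.asarray(y, dtype=float)
    arima = ARIMA(y, order=order).fit()
    linear = arima.predict(start=lags, end=len(y) - 1)      # linear component
    resid = y[lags:] - linear                               # nonlinear leftovers
    X = np.column_stack([y[i:len(y) - lags + i] for i in range(lags)])
    ann = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
    ann.fit(X, resid)                                       # ANN models residuals
    return linear + ann.predict(X)                          # combined in-sample fit
```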
Minimum-Cost Data Delivery in Heterogeneous Wireless Networks With various wireless technologies developed, a ubiquitous and integrated architecture is envisioned for future wireless communication. An important optimization issue in such an integrated system is how to minimize the overall communication cost by intelligently utilizing the available heterogeneous wireless technologies while, at the same time, meeting the quality-of-service requirements of mobi...
CCFI: Cryptographically Enforced Control Flow Integrity Control flow integrity (CFI) restricts jumps and branches within a program to prevent attackers from executing arbitrary code in vulnerable programs. However, traditional CFI still offers attackers too much freedom to choose between valid jump targets, as seen in recent attacks. We present a new approach to CFI based on cryptographic message authentication codes (MACs). Our approach, called cryptographic CFI (CCFI), uses MACs to protect control flow elements such as return addresses, function pointers, and vtable pointers. Through dynamic checks, CCFI enables much finer-grained classification of sensitive pointers than previous approaches, thwarting all known attacks and resisting even attackers with arbitrary access to program memory. We implemented CCFI in Clang/LLVM, taking advantage of recently available cryptographic CPU instructions (AES-NI). We evaluate our system on several large software packages (including nginx, Apache and memcache) as well as all their dependencies. The cost of protection ranges from a 3--18% decrease in server request rate. We also expect this overhead to shrink as Intel improves the performance of AES-NI.
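Below is a toy model of the mechanism, using Python's hmac with SHA-256 in place of the AES-NI MACs the paper relies on; the key size, tag truncation, and the pointer "class" string here are illustrative choices, not CCFI's actual encoding.

```python
# Illustrative MAC-tagged pointers in the spirit of CCFI (not its real design).
import hmac, hashlib, os

KEY = os.urandom(16)  # per-process secret key

def tag(ptr: int, cls: str) -> bytes:
    msg = ptr.to_bytes(8, "little") + cls.encode()
    return hmac.new(KEY, msg, hashlib.sha256).digest()[:8]

def store_pointer(ptr: int, cls: str):
    return (ptr, cls, tag(ptr, cls))        # the MAC travels with the pointer

def load_pointer(entry):
    ptr, cls, t = entry
    if not hmac.compare_digest(t, tag(ptr, cls)):
        raise RuntimeError("CFI violation: pointer or class was tampered with")
    return ptr

ret = store_pointer(0x401234, "return-address")
print(hex(load_pointer(ret)))               # passes the dynamic check
```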
A 12.6 mW, 573-2901 kS/s Reconfigurable Processor for Reconstruction of Compressively Sensed Physiological Signals. This article presents a reconfigurable processor based on the alternating direction method of multipliers (ADMM) algorithm for reconstructing compressively sensed physiological signals. The architecture is flexible to support physiological ExG [electrocardiography (ECG), electromyography (EMG), and electroencephalography (EEG)] signal with various signal dimensions (128, 256, 384, and 512). Data c...
1.2
0.2
0.2
0.2
0.2
0.2
0.066667
0
0
0
0
0
0
0
A dual-mode fast-transient average-current-mode buck converter without slope-compensation A dual-mode fast-transient average-current-mode buck converter without slope-compensation is proposed in this paper. The benefits of average-current-mode control are fast transient response, simple compensation design, and no requirement for slope-compensation, which minimizes several power-management problems such as EMI, size, design complexity, and cost. Average-current-mode control employs two control loops: an inner loop for current and an outer one for voltage. The proposed buck converter, using current-sensing and average-current-mode control techniques, remains stable even if the duty cycle is greater than 50%. It also switches adaptively between pulse-width modulation (PWM) and pulse-frequency modulation (PFM) to maintain high conversion efficiency. Under light-load conditions, the proposed buck converter enters PFM mode to decrease the output ripple and improve efficiency; under heavy-load conditions, it transitions smoothly to PWM mode. The dual-mode buck converter therefore achieves high conversion efficiency over a wide range of load conditions. The proposed buck converter has been fabricated in a TSMC 0.35 μm CMOS 2P4M process; the total chip area is 1.45×1.11 mm². The maximum output current is 450 mA at an output voltage of 1.8 V. When the supply voltage is 3.6 V, the output voltage can range from 0.8 to 2.8 V. The maximum transient response time is less than 10 μs. Finally, the theoretical analysis is verified by simulations and experiments.
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use "dominance frontiers", a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
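For the flavor of the dominance-frontier computation that underlies both constructions, here is a compact, unoptimized sketch; the paper's algorithms are far more efficient, and the naive iterative dominator solver and CFG encoding below are illustrative only (every node is assumed reachable from the entry).

```python
# Naive dominators + dominance frontiers over a successor-map CFG.
def dominators(cfg, entry):
    nodes = set(cfg)
    dom = {n: set(nodes) for n in nodes}
    dom[entry] = {entry}
    changed = True
    while changed:
        changed = False
        for n in nodes - {entry}:
            preds = [p for p in nodes if n in cfg[p]]
            new = {n} | set.intersection(*(dom[p] for p in preds))
            if new != dom[n]:
                dom[n], changed = new, True
    return dom

def dominance_frontiers(cfg, entry):
    dom = dominators(cfg, entry)
    # immediate dominator = closest strict dominator (largest dominator set)
    idom = {n: max(dom[n] - {n}, key=lambda d: len(dom[d]), default=None)
            for n in cfg}
    df = {n: set() for n in cfg}
    for n in cfg:
        preds = [p for p in cfg if n in cfg[p]]
        if len(preds) >= 2:                       # n is a join point
            for p in preds:
                runner = p
                while runner is not None and runner != idom[n]:
                    df[runner].add(n)             # walk up toward idom(n)
                    runner = idom[runner]
    return df

cfg = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
print(dominance_frontiers(cfg, "a"))              # d lands in DF(b) and DF(c)
```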
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ◊W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ◊W. Thus, ◊W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
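The single operation Chord exports is easy to mimic at toy scale. The sketch below hashes names onto a 2^m identifier circle and returns each key's successor node. The finger tables, joins, and stabilization that make the real protocol scale are omitted, and the 16-bit identifier space is an arbitrary choice.

```python
# Toy consistent-hashing core of Chord: key -> successor node on the circle.
import hashlib
from bisect import bisect_left

M = 16  # identifier bits (toy value)

def chord_id(name: str) -> int:
    return int.from_bytes(hashlib.sha1(name.encode()).digest(), "big") % (2 ** M)

class Ring:
    def __init__(self, node_names):
        self.ids = sorted(chord_id(n) for n in node_names)

    def successor(self, key: str) -> int:
        k = chord_id(key)
        i = bisect_left(self.ids, k)        # first node id >= k ...
        return self.ids[i % len(self.ids)]  # ... wrapping around the circle

ring = Ring(["node-a", "node-b", "node-c"])
print(ring.successor("some-key"))           # id of the node storing the key
```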
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area² product (EDA²P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA²P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
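As a concrete instance, the lasso, one of the applications listed in this abstract, has a particularly short ADMM loop. This numpy sketch follows the standard x-update / soft-threshold / dual-update pattern; the penalty, step size, iteration count, and synthetic data are arbitrary choices.

```python
# ADMM for the lasso: minimize 0.5*||Ax - b||^2 + lam*||z||_1  s.t.  x = z.
import numpy as np

def lasso_admm(A, b, lam, rho=1.0, iters=200):
    n = A.shape[1]
    x = z = u = np.zeros(n)
    AtA_rhoI = A.T @ A + rho * np.eye(n)
    Atb = A.T @ b
    for _ in range(iters):
        x = np.linalg.solve(AtA_rhoI, Atb + rho * (z - u))             # x-update
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0)  # shrinkage
        u = u + x - z                                                  # dual update
    return z

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 20))
x_true = np.zeros(20); x_true[:3] = [2.0, -1.5, 1.0]
b = A @ x_true + 0.01 * rng.normal(size=50)
print(np.round(lasso_admm(A, b, lam=1.0), 2))   # recovers a sparse estimate
```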
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)-size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε²}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D+√n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(log N), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator; while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via error amplifier) with a moderate-gain high-frequency path (via bandpass filter) at the inputs of PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate two zeros required in a Type III compensator. Constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process with constant Gm/C biasing technique being applied to one of the designs. Measurement result shows that converter output is settled within 7 μs for a load current step of 500 mA. Peak efficiency of 97% is obtained at 360 mW output power, and high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of proposed compensator is reduced by > 75 % in both designs, compared to an equivalent conventional Type III compensator.
Conductance modulation techniques in switched-capacitor DC-DC converter for maximum-efficiency tracking and ripple mitigation in 22nm Tri-gate CMOS Active conduction modulation techniques are demonstrated in a fully integrated multi-ratio switched-capacitor voltage regulator with hysteretic control, implemented in 22nm tri-gate CMOS with high-density MIM capacitors. We present (i) an adaptive switching frequency and switch-size scaling scheme for maximum efficiency tracking across a wide range of voltages and currents, governed by a frequency-based control law that is experimentally validated across multiple dies and temperatures, and (ii) a simple active ripple mitigation technique that modulates the gate drive of select MOSFET switches effectively in all conversion modes. Efficiency boosts of up to 15% are measured under light-load conditions. A load-independent output ripple of <50 mV is achieved, enabling less interleaving. Test-chip implementations and measurements demonstrate ease of integration in SoC designs, power efficiency benefits, and EMI/RFI improvements.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
A Pulse Frequency Modulation Interpretation of VCOs Enabling VCO-ADC Architectures With Extended Noise Shaping. In this paper, we propose to study voltage controlled oscillators (VCOs) based on their equivalence with pulse frequency modulators (PFMs). This approach is applied to the analysis of VCO-based analog-to-digital converters (VCO-ADCs) and deviates significantly from the conventional interpretation, where VCO-ADCs have been described as first-order ΔΣ modulators. A first advantage of our approach ...
Signal Folding in A/D Converters Signal folding appears in A/D converters (ADCs) in various ways. In this paper, the evolution of this technique is derived from the fundamentals of quantization to obtain systematic insights. We look upon folding as an automatic multiplexing of zero crossings, which simplifies hardware while preserving the high speed and low latency of a flash ADC. By appreciating similarities between the well-kno...
A 45 nm Resilient Microprocessor Core for Dynamic Variation Tolerance A 45 nm microprocessor core integrates resilient error-detection and recovery circuits to mitigate the clock frequency (FCLK) guardbands for dynamic parameter variations to improve throughput and energy efficiency. The core supports two distinct error-detection designs, allowing a direct comparison of the relative trade-offs. The first design embeds error-detection sequential (EDS) circuits in critical paths to detect late timing transitions. In addition to reducing the FCLK guardbands for dynamic variations, the embedded EDS design can exploit path-activation rates to operate the microprocessor faster than infrequently activated critical paths would otherwise allow. The second error-detection design offers a less-intrusive approach for dynamic timing-error detection by placing a tunable replica circuit (TRC) per pipeline stage to monitor worst-case delays. Although the TRCs require a delay guardband to ensure the TRC delay is always slower than critical-path delays, the TRC design captures most of the benefits of the embedded EDS design with less implementation overhead. Furthermore, while core min-delay constraints limit the potential benefits of the embedded EDS design, a salient advantage of the TRC design is the ability to detect a wider range of dynamic delay variation, as demonstrated through low supply voltage (VCC) measurements. Both error-detection designs interface with error-recovery techniques, enabling the detection and correction of timing errors from fast-changing variations such as high-frequency VCC droops. The microprocessor core also supports two separate error-recovery techniques to guarantee correct execution even if dynamic variations persist. The first technique requires clock control to replay errant instructions at FCLK/2. In comparison, the second technique is a new multiple-issue instruction replay design that corrects errant instructions with a lower performance penalty and without requiring clock control. Silicon measurements demonstrate that resilient circuits enable a 41% throughput gain at equal energy or a 22% energy reduction at equal throughput, as compared to a conventional design when executing a benchmark program with a 10% VCC droop. In addition, the microprocessor includes a new adaptive clock control circuit that interfaces with the resilient circuits and a phase-locked loop (PLL) to track recovery cycles and adapt to persistent errors by dynamically changing FCLK for maximum efficiency.
A Mostly Digital VCO-Based CT-SDM With Third-Order Noise Shaping. This paper presents the architectural concept and implementation of a mostly digital voltage-controlled oscillator-analog-to-digital converter (VCO-ADC) with third-order quantization noise shaping. The system is based on the combination of a VCO and a digital counter. It is shown how this combination can function as a continuous-time integrator to form a high-order continuous-time sigma-delta modu...
A 5-GS/s 7.2-ENOB Time-Interleaved VCO-Based ADC Achieving 30.5 fJ/cs This article presents an eight-channel time-interleaved voltage-controlled oscillator (VCO)-based analog-to-digital converter (ADC), achieving 7.2 effective number of bits (ENOB) at 5 GS/s in 28-nm CMOS. A high-speed ring oscillator with feedforward cross-coupling and a shared tail transistor is combined with an asynchronous counter in order to improve the resolution while minimizing the power co...
A Four-Channel Beamforming Down-Converter in 90-nm CMOS Utilizing Phase-Oversampling In this paper, a 4-GHz, four-channel, analog-beamforming direct-conversion down-converter in 90-nm CMOS is presented. Down-converting vector modulators (VMs) in each channel multiply the inputs with complex beamforming weights before summation between the different channels. The VMs are based on a phase-oversampling technique that allows the synthesis of inherently linear, high-resolution complex gains without complex variable gain amplifiers. A bank of simple passive mixers driven by a multiphase local oscillator (LO) in each VM performs accurate phase shifting with minimal signal distortion, and a pair of transimpedance amplifiers (TIAs) combines the mixer outputs to perform beamforming weighting and combining. Each individual channel achieves 360° phase shift and gain-setting programmability with 8-bit digital control, a complex gain constellation with a mean error-vector magnitude (EVM) of <2%, and a measured phase error of <5.5° at a back-off of 4 dB from the maximum gain setting. The beamformer demonstrates >24 dB blocker rejection for blockers impinging from different directions and 17-dB signal EVM improvement in the presence of an in-channel blocker.
Phase averaging and interpolation using resistor strings or resistor rings for multi-phase clock generation Circuit techniques using resistor strings (R-strings) and resistor rings (R-rings) for phase averaging and interpolation are described. Phase averaging can reduce phase errors, and phase interpolation can increase the number of available phases. In addition to the waveform shape, the averaging and interpolation performance of the R-strings and R-rings is determined by the clock frequency normalized by an RC time constant of the circuits. To attain better phase accuracy, a smaller RC time constant is required, but at the expense of larger power dissipation. To demonstrate the resistor ring's capability of phase averaging and interpolation, a 125-MHz 8-bit digital-to-phase converter (DPC) was designed and fabricated using a standard 0.35-μm SPQM CMOS technology. Measurement results show that the DPC attains 8-bit resolution using the proposed phase averaging and interpolation technique.
Max-Min D-Cluster Formation in Wireless Ad Hoc Networks An ad hoc network may be logically represented as a set of clusters. The clusterheads form a d-hop dominating set. Each node is at most d hops from a clusterhead. Clusterheads form a virtual backbone and may be used to route packets for nodes in their cluster. Previous heuristics restricted themselves to 1-hop clusters. We show that the minimum d-hop dominating set problem is NP-complete. Then we present a heuristic to form d-clusters in a wireless ad hoc network. Nodes are assumed to have a non-deterministic mobility pattern. Clusters are formed by diffusing node identities along the wireless links. When the heuristic terminates, a node either becomes a clusterhead or is at most d wireless hops away from its clusterhead. The value of d is a parameter of the heuristic. The heuristic can be run either at regular intervals or whenever the network configuration changes. One of the features of the heuristic is that it tends to re-elect existing clusterheads even when the network configuration changes. This helps to reduce the communication overhead during the transition from old clusterheads to new clusterheads. There is also a tendency to evenly distribute the mobile nodes among the clusterheads, and to evenly distribute the responsibility of acting as clusterheads among all nodes. Thus, the heuristic is fair and stable. Simulation experiments demonstrate that the proposed heuristic is better than two earlier heuristics, namely the LCA [1] and degree-based [11] solutions.
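The diffusion of node identities is often described as d rounds of "floodmax" followed by d rounds of "floodmin". The sketch below captures only that skeleton: a node whose own id survives the floodmin phase declares itself clusterhead. The paper's remaining selection and tie-breaking rules are omitted, and the example graph is invented.

```python
# Skeleton of Max-Min d-cluster formation: floodmax then floodmin rounds.
def max_min_d_cluster(adj, d):
    w = {v: v for v in adj}                      # each node starts with its own id
    for _ in range(d):                           # floodmax: largest id spreads
        w = {v: max([w[v]] + [w[u] for u in adj[v]]) for v in adj}
    wmax = dict(w)                               # remembered per-node winners
    for _ in range(d):                           # floodmin: smallest id spreads
        w = {v: min([w[v]] + [w[u] for u in adj[v]]) for v in adj}
    heads = {v for v in adj if w[v] == v}        # own id came back -> clusterhead
    return heads, wmax                           # wmax hints at each node's head

adj = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}  # a 5-node path
print(max_min_d_cluster(adj, d=2))               # heads {3, 5} on this toy graph
```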
A Case for Intelligent RAM Two trends call into question the current practice of microprocessors and DRAMs being fabricated as different chips on different fab lines: 1) the gap between processor and DRAM speed is growing at 50% per year; and 2) the size and organization of memory on a single DRAM chip is becoming awkward to use in a system, yet size is growing at 60% per year. Intelligent RAM, or IRAM, merges processing and memory into a single chip to lower memory latency, increase memory bandwidth, and improve energy efficiency as well as to allow more flexible selection of memory size and organization. In addition, IRAM promises savings in power and board area. We review the state of microprocessors and DRAMs today, explore some of the opportunities and challenges for IRAMs, and finally estimate performance and energy efficiency of three IRAM designs.
Communication-efficient leader election and consensus with limited link synchrony We study the degree of synchrony required to implement the leader election failure detector Ω and to solve consensus in partially synchronous systems. We show that in a system with n processes and up to f process crashes, one can implement Ω and solve consensus provided there exists some (unknown) correct process with f outgoing links that are eventually timely. In the special case where f = 1, an important case in practice, this implies that to implement Ω and solve consensus it is sufficient to have just one eventually timely link -- all the other links in the system, Θ(n²) of them, may be asynchronous. There is no need to know which link p → q is eventually timely, when it becomes timely, or what is its bound on message delay. Surprisingly, it is not even required that the source p or destination q of this link be correct: either p or q may actually crash, in which case the link p → q is eventually timely in a trivial way, and it is useless for sending messages. We show that these results are in a sense optimal: even if every process has f - 1 eventually timely links, neither Ω nor consensus can be solved. We also give an algorithm that implements Ω in systems where some correct process has f outgoing links that are eventually timely, such that eventually only f links carry messages, and we show that this is optimal. For f = 1, this algorithm ensures that all the links, except for one, eventually become quiescent.
A 5-Gb/s ADC-Based Feed-Forward CDR in 65 nm CMOS This paper presents an ADC-based CDR that blindly samples the received signal at twice the data rate and uses these samples to directly estimate the locations of zero crossings for the purpose of clock and data recovery. We successfully confirmed the operation of the proposed CDR architecture at 5 Gb/s. The receiver is implemented in 65 nm CMOS, occupies 0.51 mm(2) and consumes 178.4 mW at 5 Gb/s.
Efficiency of a Regenerative Direct-Drive Electromagnetic Active Suspension. The efficiency and power consumption of a direct-drive electromagnetic active suspension system for automotive applications are investigated. A McPherson suspension system is considered, where the strut consists of a direct-drive brushless tubular permanent-magnet actuator in parallel with a passive spring and damper. This suspension system can both deliver active forces and regenerate power due to imposed movements. A linear quadratic regulator controller is developed for the improvement of comfort and handling (dynamic tire load). The power consumption is simulated as a function of the passive damping in the active suspension system. Finally, measurements are performed on a quarter-car test setup to validate the analysis and simulations.
Software Defined Integrated RF Frontend Receiver Design.
A 12-Bit Dynamic Tracking Algorithm-Based SAR ADC With Real-Time QRS Detection A 12-bit successive approximation register (SAR) ADC based on a dynamic tracking algorithm, together with a real-time QRS-detection algorithm, is proposed. The dynamic tracking algorithm features two tracking windows adjacent to the prediction interval. The algorithm tracks the input signal's variation range and automatically adjusts the subrange interval and updates the prediction code. The QRS-complex detection algorithm integrates the synchronous time-sequential ADC with a real-time QRS detector. The chip is fabricated in a standard 0.13 μm CMOS process with a 0.6 V supply. Measurement results show that the proposed ADC exhibits a 10.72 effective number of bits (ENOB) and 79.63 dB spur-free dynamic range (SFDR) at a 10 kHz sample rate given a 41.5 Hz sinusoid input. The DNL and INL are bounded at -0.6/0.62 LSB and -0.67/1.43 LSB. The ADC achieves a best-case FoM of 48 fJ/conversion-step. The prototype is also tested with an ECG signal input and extracts the heartbeat signal successfully.
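A toy software model of the tracking idea is shown below; the window width, update policy, and fallback rule are guesses for illustration, not the paper's design. Conversions start near the previous code and only fall back to a full 12-bit binary search when the input escapes the tracking window.

```python
# Toy dynamic-tracking SAR model (illustrative parameters, 12-bit codes).
def sar_convert(sample, lo=0, hi=4095):
    while lo < hi:                        # plain successive approximation
        mid = (lo + hi) // 2
        if sample > mid:
            lo = mid + 1
        else:
            hi = mid
    return lo

def tracking_adc(samples, window=32):
    pred, out = 2048, []
    for s in samples:
        if abs(s - pred) <= window:       # short search inside the window
            code = sar_convert(s, max(pred - window, 0), min(pred + window, 4095))
        else:                             # input escaped: full conversion
            code = sar_convert(s)
        pred = code                       # prediction-code update
        out.append(code)
    return out

print(tracking_adc([2048, 2051, 2060, 3500, 3504]))
```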
1.24
0.24
0.24
0.24
0.24
0.12
0.048
0
0
0
0
0
0
0
Probabilistic Neural Network With Complex Exponential Activation Functions in Image Recognition. If the training data set in an image recognition task is not very large, feature extraction with a convolutional neural network is usually applied. Here, we focus on the nonparametric classification of the extracted feature vectors using the probabilistic neural network (PNN). The latter is characterized by high runtime and memory-space complexity. We propose to overcome these drawbacks by replac...
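For context on where that runtime and memory complexity comes from: a plain PNN is essentially a Parzen-window classifier that keeps and evaluates every training pattern at prediction time. A minimal numpy version follows; the paper's complex-exponential activations are not reproduced, and sigma and the toy data are arbitrary.

```python
# Minimal PNN (Parzen-window) classifier with Gaussian kernels.
import numpy as np

def pnn_predict(X_train, y_train, x, sigma=1.0):
    scores = {}
    for c in np.unique(y_train):
        d = X_train[y_train == c] - x                 # pattern layer: distances
        k = np.exp(-np.sum(d * d, axis=1) / (2 * sigma ** 2))
        scores[c] = k.mean()                          # summation layer per class
    return max(scores, key=scores.get)                # decision layer

X = np.array([[0.0, 0.0], [0.2, 0.1], [1.0, 1.0], [0.9, 1.1]])
y = np.array([0, 0, 1, 1])
print(pnn_predict(X, y, np.array([0.8, 0.9]), sigma=0.5))  # -> 1
```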
A Probabilistic Neural-Fuzzy Learning System for Stochastic Modeling A probabilistic fuzzy neural network (PFNN) with a hybrid learning mechanism is proposed to handle complex stochastic uncertainties. Fuzzy logic systems (FLSs) are well known for vagueness processing. Embedded with the probabilistic method, an FLS will possess the capability to capture stochastic uncertainties. Further enhanced with the neural learning, it will be able to work under time-varying stochastic environment. Integrated with a statistical process control (SPC) based monitoring method, the PFNN can maintain the robust modeling performance. Finally, the successful simulation demonstrates the modeling effectiveness of the proposed PFNN under the time-varying stochastic conditions.
Design of Fuzzy-Neural-Network-Inherited Backstepping Control for Robot Manipulator Including Actuator Dynamics This study presents the design and analysis of an intelligent control system that inherits the systematic and recursive design methodology for an n-link robot manipulator, including actuator dynamics, in order to achieve a high-precision position tracking with a firm stability and robustness. First, the coupled higher order dynamic model of an n-link robot manipulator is introduced briefly. Then, a conventional backstepping control (BSC) scheme is developed for the joint position tracking of the robot manipulator. Moreover, a fuzzy-neural-network-inherited BSC (FNNIBSC) scheme is proposed to relax the requirement of detailed system information to improve the robustness of BSC and to deal with the serious chattering that is caused by the discontinuous function. In the FNNIBSC strategy, the FNN framework is designed to mimic the BSC law, and adaptive tuning algorithms for network parameters are derived in the sense of the projection algorithm and Lyapunov stability theorem to ensure the network convergence as well as stable control performance. Numerical simulations and experimental results of a two-link robot manipulator that are actuated by dc servomotors are provided to justify the claims of the proposed FNNIBSC system, and the superiority of the proposed FNNIBSC scheme is also evaluated by quantitative comparison with previous intelligent control schemes.
Reactive Power Control of Three-Phase Grid-Connected PV System During Grid Faults Using Takagi–Sugeno–Kang Probabilistic Fuzzy Neural Network Control An intelligent controller based on the Takagi-Sugeno-Kang-type probabilistic fuzzy neural network with an asymmetric membership function (TSKPFNN-AMF) is developed in this paper for the reactive and active power control of a three-phase grid-connected photovoltaic (PV) system during grid faults. The inverter of the three-phase grid-connected PV system should provide a proper ratio of reactive power to meet the low-voltage ride through (LVRT) regulations and control the output current without exceeding the maximum current limit simultaneously during grid faults. Therefore, the proposed intelligent controller regulates the value of reactive power to a new reference value, which complies with the regulations of LVRT under grid faults. Moreover, a dual-mode operation control method of the converter and inverter of the three-phase grid-connected PV system is designed to eliminate the fluctuation of dc-link bus voltage under grid faults. Furthermore, the network structure, the online learning algorithm, and the convergence analysis of the TSKPFNN-AMF are described in detail. Finally, some experimental results are illustrated to show the effectiveness of the proposed control for the three-phase grid-connected PV system.
Discrete-Time Quasi-Sliding-Mode Control With Prescribed Performance Function and its Application to Piezo-Actuated Positioning Systems. In this paper, the constrained control problem of the prescribed performance control technique is discussed in discrete-time domain for single input-single output dynamical systems. The goal of this design is to maintain the tracking error trajectory in a predefined convergence zone described by a performance function in the presence of the uncertainties. In order to achieve this goal, the discret...
A Survey On Sliding Mode Control For Networked Control Systems In the framework of the networked control systems (NCSs), the components are connected with each other over a shared band-limited network. The merits of NCSs include easy extensibility, resource sharing, high reliability and so forth. However, the insertion of the communication network brings many challenges, such as network-induced phenomena and cyber-security, which should be handled properly. On the other hand, the sliding mode control (SMC) has become an effective scheme for the synthesis of NCSs due to its strong robustness and SMC has wide applications in NCSs. In this paper, some recent advances on SMC for NCSs are reviewed. In particular, some new SMC schemes for NCSs subject to time-delay, packet losses, quantisation and uncertainty/disturbance are summarised firstly. Subsequently, the problem of SMC for NCSs under scheduling protocols is discussed, where different communication protocols are introduced for the energy saving purpose during the synthesis of NCSs. Next, some recent results on SMC for NCSs with actuator/sensor fault and cyber-attack are recalled. Finally, the conclusion is provided and the potential research challenges on SMC for NCSs are pointed out.
The part-time parliament Recent archaeological discoveries on the island of Paxos reveal that the parliament functioned despite the peripatetic propensity of its part-time legislators. The legislators maintained consistent copies of the parliamentary record, despite their frequent forays from the chamber and the forgetfulness of their messengers. The Paxon parliament's protocol provides a new way of implementing the state machine approach to the design of distributed systems.
Design Techniques for Fully Integrated Switched-Capacitor DC-DC Converters. This paper describes design techniques to maximize the efficiency and power density of fully integrated switched-capacitor (SC) DC-DC converters. Circuit design methods are proposed to enable simplified gate drivers while supporting multiple topologies (and hence output voltages). These methods are verified by a proof-of-concept converter prototype implemented in 0.374 mm2 of a 32 nm SOI process. ...
Distributed reset A reset subsystem is designed that can be embedded in an arbitrary distributed system in order to allow the system processes to reset the system when necessary. Our design is layered, and comprises three main components: a leader election, a spanning tree construction, and a diffusing computation. Each of these components is self-stabilizing in the following sense: if the coordination between the up-processes in the system is ever lost (due to failures or repairs of processes and channels), then each component eventually reaches a state where coordination is regained. This capability makes our reset subsystem very robust: it can tolerate fail-stop failures and repairs of processes and channels, even when a reset is in progress.
Distributed multi-agent optimization with state-dependent communication We study distributed algorithms for solving global optimization problems in which the objective function is the sum of local objective functions of agents and the constraint set is given by the intersection of local constraint sets of agents. We assume that each agent knows only his own local objective function and constraint set, and exchanges information with the other agents over a randomly varying network topology to update his information state. We assume a state-dependent communication model over this topology: communication is Markovian with respect to the states of the agents and the probability with which the links are available depends on the states of the agents. We study a projected multi-agent subgradient algorithm under state-dependent communication. The state-dependence of the communication introduces significant challenges and couples the study of information exchange with the analysis of subgradient steps and projection errors. We first show that the multi-agent subgradient algorithm when used with a constant stepsize may result in the agent estimates to diverge with probability one. Under some assumptions on the stepsize sequence, we provide convergence rate bounds on a “disagreement metric” between the agent estimates. Our bounds are time-nonhomogeneous in the sense that they depend on the initial starting time. Despite this, we show that agent estimates reach an almost sure consensus and converge to the same optimal solution of the global optimization problem with probability one under different assumptions on the local constraint sets and the stepsize sequence.
Yet another MicroArchitectural Attack: exploiting I-Cache MicroArchitectural Attacks (MA), which can be considered as a special form of Side-Channel Analysis, exploit microarchitectural functionalities of processor implementations and can compromise the security of computational environments even in the presence of sophisticated protection mechanisms like virtualization and sandboxing. This newly evolving research area has attracted significant interest due to the broad application range and the potentials of these attacks. Cache Analysis and Branch Prediction Analysis were the only types of MA that had been known publicly. In this paper, we introduce Instruction Cache (I-Cache) as yet another source of MA and present our experimental results which clearly prove the practicality and danger of I-Cache Attacks.
A decentralized modular control framework for robust control of FES-activated walker-assisted paraplegic walking using terminal sliding mode and fuzzy logic control. A major challenge to developing functional electrical stimulation (FES) systems for paraplegic walking and widespread acceptance of these systems is the design of a robust control strategy that provides satisfactory tracking performance. The systems need to be robust against time-varying properties of neuromusculoskeletal dynamics, day-to-day variations, subject-to-subject variations, external dis...
PUMP: a programmable unit for metadata processing We introduce the Programmable Unit for Metadata Processing (PUMP), a novel software-hardware element that allows flexible computation with uninterpreted metadata alongside the main computation with modest impact on runtime performance (typically 10--40% for single policies, compared to metadata-free computation on 28 SPEC CPU2006 C, C++, and Fortran programs). While a host of prior work has illustrated the value of ad hoc metadata processing for specific policies, we introduce an architectural model for extensible, programmable metadata processing that can handle arbitrary metadata and arbitrary sets of software-defined rules in the spirit of the time-honored 0-1-∞ rule. Our results show that we can match or exceed the performance of dedicated hardware solutions that use metadata to enforce a single policy, while adding the ability to enforce multiple policies simultaneously and achieving flexibility comparable to software solutions for metadata processing. We demonstrate the PUMP by using it to support four diverse safety and security policies---spatial and temporal memory safety, code and data taint tracking, control-flow integrity including return-oriented-programming protection, and instruction/data separation---and quantify the performance they achieve, both singly and in combination.
A 1V 3.5 μW Bio-AFE With Chopper-Capacitor-Chopper Integrator-Based DSL and Low Power GM-C Filter This brief presents a low-noise, low-power bio-signal acquisition analog front-end (Bio-AFE). It mainly includes a capacitively coupled chopper-stabilized instrumentation amplifier (CCIA), a programmable gain amplifier (PGA), a low-pass filter (LPF), and a successive approximation analog to digital converter (SAR ADC). A chopper-capacitor-chopper integrator based DC servo loop (C3IB-DSL...
score_0..score_13: 1.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0, 0, 0, 0, 0, 0, 0, 0
LEADMesh: Design and analysis of an efficient leader election protocol for wireless mesh networks. The leader election problem has been studied in the past to improve the efficiency of both distributed systems and wireless ad hoc and sensor networks. Yet, little research has been done on the leader election process for wireless mesh networks. Most existing leader election protocols consider wireless networks in general, without focusing on the particularities of mesh networks, and are therefore not well suited to wireless mesh networks. The lack of research on this issue has motivated us to design a leader election protocol dedicated to wireless mesh networks. In this work, we propose an efficient leader election protocol for wireless mesh networks, based on the construction of a spanning tree that includes all wireless mesh routers. The protocol elects the node with the longest remaining battery life. In this paper, we give a detailed description of the proposed protocol, prove its correctness, discuss its message and time complexities, and then evaluate its performance through simulation using ns-2. We show that our protocol is efficient and scales well with the increase in the number of mesh routers and mesh clients.
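As a toy illustration of the election rule described above (not the LEADMesh protocol itself), the sketch below convergecasts the maximum remaining battery life up a precomputed spanning tree; the tree and battery tables are hypothetical stand-ins for mesh-router state.

```python
# Sketch: elect the node with the longest remaining battery life by
# propagating (battery, node) pairs from the leaves toward the root.

def elect_leader(tree, battery, node):
    """tree: {node: [children]} spanning tree; battery: {node: hours left}."""
    best = (battery[node], node)
    for child in tree.get(node, []):
        best = max(best, elect_leader(tree, battery, child))  # convergecast
    return best

tree = {"r": ["a", "b"], "a": ["c"], "b": [], "c": []}
battery = {"r": 3.0, "a": 7.5, "b": 4.2, "c": 9.1}
print(elect_leader(tree, battery, "r"))  # (9.1, 'c'): node c becomes leader
```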
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use dominance frontiers, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
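The dominance-frontier computation is compact enough to sketch. The version below is the later Cooper-Harvey-Kennedy reformulation rather than this paper's original algorithm: for each join node, walk every predecessor's immediate-dominator chain up to the join node's own immediate dominator, adding the join node to the frontier of each node visited. The tiny diamond-shaped CFG is illustrative.

```python
# Sketch: dominance frontiers from predecessor lists and immediate dominators.

def dominance_frontiers(preds, idom):
    """preds: {node: [predecessor nodes]}; idom: {node: immediate dominator}."""
    df = {n: set() for n in preds}
    for n, ps in preds.items():
        if len(ps) >= 2:                  # only join points contribute
            for p in ps:
                runner = p
                while runner != idom[n]:  # climb the dominator tree
                    df[runner].add(n)
                    runner = idom[runner]
    return df

# entry -> a -> merge and entry -> b -> merge (a diamond).
preds = {"entry": [], "a": ["entry"], "b": ["entry"], "merge": ["a", "b"]}
idom = {"entry": "entry", "a": "entry", "b": "entry", "merge": "entry"}
print(dominance_frontiers(preds, idom))  # 'merge' lands in DF(a) and DF(b)
```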
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ⋄W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ⋄W. Thus, ⋄W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
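The key-to-node mapping at the core of Chord fits in a few lines; the toy below omits finger tables (which give Chord its O(log n) lookups) and join/leave handling, and simply resolves a key to its successor on the identifier ring. The identifier-space size M is an arbitrary choice for the sketch.

```python
# Sketch: consistent hashing onto a ring; a key lives at its successor node.
import hashlib

M = 2 ** 16  # identifier space size (assumption for this toy)

def ident(name):
    return int(hashlib.sha1(name.encode()).hexdigest(), 16) % M

def successor(node_ids, key_id):
    """First node at or clockwise after key_id on the ring."""
    candidates = [n for n in node_ids if n >= key_id]
    return min(candidates) if candidates else min(node_ids)  # wrap around

nodes = sorted(ident(f"node{i}") for i in range(8))
key = ident("some-data-item")
print(f"key {key} is stored at node {successor(nodes, key)}")
```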
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area2 product (EDA2P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA2P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
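As a worked instance of the splitting described above, here is a minimal ADMM loop for the lasso, min 0.5*||Ax - b||^2 + lam*||x||_1. The updates follow the standard pattern (a ridge-like x-step, a soft-thresholding z-step, a scaled dual update); the data and problem sizes are arbitrary test values.

```python
# Sketch: ADMM for the lasso with a cached Cholesky factorization.
import numpy as np

def lasso_admm(A, b, lam, rho=1.0, iters=300):
    n = A.shape[1]
    Atb = A.T @ b
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))   # factor once, reuse
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    for _ in range(iters):
        rhs = Atb + rho * (z - u)
        x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))              # x-step
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0)  # z-step
        u = u + x - z                                                  # dual step
    return z

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 20))
x_true = np.zeros(20); x_true[:3] = [2.0, -1.0, 0.5]
b = A @ x_true + 0.01 * rng.normal(size=50)
print(np.round(lasso_admm(A, b, lam=1.0), 2))  # sparse estimate of x_true
```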
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)- size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε2}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D+√n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(logN), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator, while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via error amplifier) with a moderate-gain high-frequency path (via bandpass filter) at the inputs of the PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate the two zeros required in a Type III compensator. A constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process, with the constant Gm/C biasing technique being applied to one of the designs. Measurement results show that the converter output settles within 7 μs for a load current step of 500 mA. Peak efficiency of 97% is obtained at 360 mW output power, and high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of the proposed compensator are reduced by more than 75% in both designs, compared to an equivalent conventional Type III compensator.
Conductance modulation techniques in switched-capacitor DC-DC converter for maximum-efficiency tracking and ripple mitigation in 22nm Tri-gate CMOS Active conduction modulation techniques are demonstrated in a fully integrated multi-ratio switched-capacitor voltage regulator with hysteretic control, implemented in 22nm tri-gate CMOS with high-density MIM capacitor. We present (i) an adaptive switching frequency and switch-size scaling scheme for maximum-efficiency tracking across a wide range of voltages and currents, governed by a frequency-based control law that is experimentally validated across multiple dies and temperatures, and (ii) a simple active ripple mitigation technique that modulates the gate drive of select MOSFET switches effectively in all conversion modes. Efficiency boosts of up to 15% are measured under light-load conditions. Load-independent output ripple of <50 mV is achieved, enabling fewer interleaving phases. Testchip implementations and measurements demonstrate ease of integration in SoC designs, power-efficiency benefits, and EMI/RFI improvements.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized, while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
score_0..score_13: 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Robust output feedback model predictive control of constrained linear systems This paper provides a solution to the problem of robust output feedback model predictive control of constrained, linear, discrete-time systems in the presence of bounded state and output disturbances. The proposed output feedback controller consists of a simple, stable Luenberger state estimator and a recently developed, robustly stabilizing, tube-based, model predictive controller. The state estimation error is bounded by an invariant set. The tube-based controller ensures that all possible realizations of the state trajectory lie in a simple uncertainty tube the 'center' of which is the solution of a nominal (disturbance-free) system and the 'cross-section' of which is also invariant. Satisfaction of the state and input constraints for the original system is guaranteed by employing tighter constraint sets for the nominal system. The complexity of the resultant controller is similar to that required for nominal model predictive control.
Quadratic programming with one negative eigenvalue is NP-hard We show that the problem of minimizing a concave quadratic function with one concave direction is NP-hard. This result can be interpreted as an attempt to understand exactly what makes nonconvex quadratic programming problems hard. Sahni in 1974 [8] showed that quadratic programming with a negative definite quadratic term (n negative eigenvalues) is NP-hard, whereas Kozlov, Tarasov and Hacijan [2] showed in 1979 that the ellipsoid algorithm solves the convex quadratic problem (no negative eigenvalues) in polynomial time. This report shows that even one negative eigenvalue makes the problem NP-hard.
Energy Management Strategies for Vehicular Electric Power Systems In the near future, a significant increase in electric power consumption in vehicles is expected. To limit the associated increase in fuel consumption and exhaust emissions, smart strategies for the generation, storage/retrieval, distribution, and consumption of electric power will be used. Inspired by the research on energy management for hybrid electric vehicles (HEVs), this paper presents an ex...
Observer-Based Control of Discrete-Time LPV Systems With Uncertain Parameters In this note, linear matrix inequality-based design conditions are presented for observer-based controllers that stabilize discrete-time linear parameter-varying systems in the situation where the parameters are not exactly known, but are only available with a finite accuracy. The presented framework allows one to make tradeoffs between the admissible level of parameter uncertainty on the one hand and the transient performance on the other. In addition, the level of parameter uncertainty can be maximized while still guaranteeing closed-loop stability.
Model predictive control: theory and practice—a survey We refer to Model Predictive Control (MPC) as that family of controllers in which there is a direct use of an explicit and separately identifiable model. Control design methods based on the MPC concept have found wide acceptance in industrial applications and have been studied by academia. The reason for such popularity is the ability of MPC designs to yield high performance control systems capable of operating without expert intervention for long periods of time. In this paper the issues of importance that any control system should address are stated. MPC techniques are then reviewed in the light of these issues in order to point out their advantages in design and implementation. A number of design techniques emanating from MPC, namely Dynamic Matrix Control, Model Algorithmic Control, Inferential Control and Internal Model Control, are put in perspective with respect to each other and the relation to more traditional methods like Linear Quadratic Control is examined. The flexible constraint handling capabilities of MPC are shown to be a significant advantage in the context of the overall operating objectives of the process industries and the 1-, 2-, and ∞-norm formulations of the performance objective are discussed. The application of MPC to non-linear systems is examined and it is shown that its main attractions carry over. Finally, it is explained that though MPC is not inherently more or less robust than classical feedback, it can be adjusted more easily for robustness.
A Global Algorithm for Nonlinear Semidefinite Programming In this paper we propose a global algorithm for solving nonlinear semidefinite programming problems. This algorithm, inspired by the classic SQP (sequential quadratic programming) method, modifies the S-SDP (sequential semidefinite programming) local method by using a nondifferentiable merit function combined with a line search strategy.
Interval type-2 fuzzy logic systems: theory and design We present the theory and design of interval type-2 fuzzy logic systems (FLSs). We propose an efficient and simplified method to compute the input and antecedent operations for interval type-2 FLSs: one that is based on a general inference formula for them. We introduce the concept of upper and lower membership functions (MFs) and illustrate our efficient inference method for the case of Gaussian primary MFs. We also propose a method for designing an interval type-2 FLS in which we tune its parameters. Finally, we design type-2 FLSs to perform time-series forecasting when a nonstationary time-series is corrupted by additive noise where SNR is uncertain and demonstrate an improved performance over type-1 FLSs
GloMoSim: a library for parallel simulation of large-scale wireless networks A number of library-based parallel and sequential network simulators have been designed. This paper describes a library, called GloMoSim (for Global Mobile system Simulator), for parallel simulation of wireless networks. GloMoSim has been designed to be extensible and composable: the communication protocol stack for wireless networks is divided into a set of layers, each with its own API. Models of protocols at one layer interact with those at a lower (or higher) layer only via these APIs. The modular implementation enables consistent comparison of multiple protocols at a given layer. The parallel implementation of GloMoSim can be executed using a variety of conservative synchronization protocols, which include the null message and conditional event algorithms. This paper describes the GloMoSim library, addresses a number of issues relevant to its parallelization, and presents a set of experimental results on the IBM 9076 SP, a distributed memory multicomputer. These experiments use models constructed from the library modules. 1 Introduction The rapid advancement in portable computing platforms and wireless communication technology has led to significant interest in mobile computing and mobile networking. Two primary forms of mobile computing are becoming popular: first, mobile computers continue to heavily use wired network infrastructures. Instead of being hardwired to a single location (or IP address), a computer can dynamically move to multiple locations while maintaining application transparency. Protocols such as
Gossip-Based Computation of Aggregate Information Over the last decade, we have seen a revolution in connectivity between computers, and a resulting paradigm shift from centralized to highly distributed systems. With massive scale also comes massive instability, as node and link failures become the norm rather than the exception. For such highly volatile systems, decentralized gossip-based protocols are emerging as an approach to maintaining simplicity and scalability while achieving fault-tolerant information dissemination.In this paper, we study the problem of computing aggregates with gossip-style protocols. Our first contribution is an analysis of simple gossip-based protocols for the computations of sums, averages, random samples, quantiles, and other aggregate functions, and we show that our protocols converge exponentially fast to the true answer when using uniform gossip.Our second contribution is the definition of a precise notion of the speed with which a node's data diffuses through the network. We show that this diffusion speed is at the heart of the approximation guarantees for all of the above problems. We analyze the diffusion speed of uniform gossip in the presence of node and link failures, as well as for flooding-based mechanisms. The latter expose interesting connections to random walks on graphs.
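A push-sum flavor of uniform gossip for averaging, in the spirit of the protocols analyzed above, can be simulated directly. The sketch runs synchronous rounds on one machine and ignores failures; each node keeps half of its (sum, weight) pair and pushes the other half to a uniformly random peer, and every ratio sum/weight converges to the true average.

```python
# Sketch: push-sum gossip; total sum and total weight are conserved.
import random

def push_sum_average(values, rounds=60, seed=1):
    rng = random.Random(seed)
    n = len(values)
    s, w = list(values), [1.0] * n
    for _ in range(rounds):
        ds, dw = [0.0] * n, [0.0] * n
        for i in range(n):
            j = rng.randrange(n)                  # uniform gossip target
            ds[i] += s[i] / 2; dw[i] += w[i] / 2  # keep half
            ds[j] += s[i] / 2; dw[j] += w[i] / 2  # push half to peer j
        s, w = ds, dw
    return [si / wi for si, wi in zip(s, w)]

print(push_sum_average([10.0, 0.0, 4.0, 6.0]))  # every entry near 5.0
```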
Software-defined radio receiver: dream to reality This article describes a fully integrated 90 nm CMOS software-defined radio receiver operating in the 800 MHz to 5 GHz band. Unlike the classical SDR paradigm, which digitizes the whole spectrum uniformly, this receiver acts as a signal conditioner for the analog-to-digital converters, emphasizing only the wanted channel. Thus, the ADCs operate with modest resolution and sample rate, consuming low power. This approach makes portable SDR a reality
Yet another MicroArchitectural Attack: exploiting I-Cache MicroArchitectural Attacks (MA), which can be considered as a special form of Side-Channel Analysis, exploit microarchitectural functionalities of processor implementations and can compromise the security of computational environments even in the presence of sophisticated protection mechanisms like virtualization and sandboxing. This newly evolving research area has attracted significant interest due to the broad application range and the potentials of these attacks. Cache Analysis and Branch Prediction Analysis were the only types of MA that had been known publicly. In this paper, we introduce Instruction Cache (I-Cache) as yet another source of MA and present our experimental results which clearly prove the practicality and danger of I-Cache Attacks.
Distributed Primal-Dual Subgradient Method for Multiagent Optimization via Consensus Algorithms. This paper studies the problem of optimizing the sum of multiple agents' local convex objective functions, subject to global convex inequality constraints and a convex state constraint set over a network. Through characterizing the primal and dual optimal solutions as the saddle points of the Lagrangian function associated with the problem, we propose a distributed algorithm, named the distributed primal-dual subgradient method, to provide approximate saddle points of the Lagrangian function, based on the distributed average consensus algorithms. Under Slater's condition, we obtain bounds on the convergence properties of the proposed method for a constant step size. Simulation examples are provided to demonstrate the effectiveness of the proposed method.
Fundamental analysis of a car to car visible light communication system This paper presents a mathematical model for car-to-car (C2C) visible light communications (VLC) that aims to predict the system performance under different communication geometries. A market-weighted headlamp beam pattern model is employed. We consider both the line-of-sight (LOS) and non-line-of-sight (NLOS) links, and outline the relationship between the communication distance, the system bit error rate (BER) performance and the BER distribution on a vertical plane. Results show that by placing a photodetector (PD) at a height of 0.2-0.4 m above road surface in the car, the communications coverage range can be extended up to 20 m at a data rate of 2Mbps.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized, while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
score_0..score_13: 1.220667, 0.220667, 0.220667, 0.220667, 0.110333, 0.055167, 0.004571, 0, 0, 0, 0, 0, 0, 0
3-D Performance Analysis and Multiobjective Optimization of Coreless-Type PM Linear Synchronous Motors. This paper presents a three-dimensional (3-D) performance analysis and multiobjective design optimization of coreless-type permanent magnet linear synchronous motors. The average of the open-circuit magnetic field distribution is analytically predicted by solving two Laplace's equations. The winding factor calculation, which requires an approach different from conventional slotted motors, is provided ...
Maximum Ambiguity-Based Sample Selection in Fuzzy Decision Tree Induction Sample selection chooses a number of representative samples from a large database such that a learning algorithm can have a reduced computational cost and an improved learning accuracy. This paper gives a new sample selection mechanism, i.e., maximum ambiguity-based sample selection in fuzzy decision tree induction. Compared with the existing sample selection methods, this mechanism selects the samples based on the principle of maximal classification ambiguity. The major advantage of this mechanism is that the adjustment of the fuzzy decision tree is minimized when adding selected samples to the training set. This advantage is confirmed via the theoretical analysis of the leaf-nodes' frequency in the decision trees. The decision tree generated from the selected samples usually has a better performance than that from the original database. Furthermore, experimental results show that the generalization ability of the tree based on our selection mechanism is far superior to that based on the random selection mechanism.
General Airgap Field Modulation Theory For Electrical Machines This paper proposes a general field modulation theory for electrical machines by introducing a magnetomotive force modulation operator to characterize the influence of short-circuited coils, variable reluctance, and flux guides on the primitive magnetizing magnetomotive force distribution, which is established by the field winding function multiplied by the field current along the airgap periphery. A magnetically anisotropic stator and rotor behave like modulators that produce a spectrum of field harmonics, and the armature winding plays the role of a spatial filter that extracts effective field harmonics to contribute the corresponding flux linkage and induce the electromotive force. The developed field modulation theory not only unifies the principle analysis of a large variety of electrical machines, including the conventional dc machine, induction machine, and synchronous machine, which are just special cases of the general field-modulated machines, thus eliminating the fragmentation of machine theory, but also provides powerful guidance for inventing new machine topologies.
Discovering the Relationship Between Generalization and Uncertainty by Incorporating Complexity of Classification. The generalization ability of a classifier learned from a training set is usually dependent on the classifier's uncertainty, which is often described by the fuzziness of the classifier's outputs on the training set. Since the exact dependency relation between generalization and uncertainty of a classifier is quite complicated, it is difficult to clearly or explicitly express this relation in gener...
Recent advances in deep learning
Random Forests Random forests are a combination of tree predictors such that each tree depends on the values of a random vector sampled independently and with the same distribution for all trees in the forest. The generalization error for forests converges a.s. to a limit as the number of trees in the forest becomes large. The generalization error of a forest of tree classifiers depends on the strength of the individual trees in the forest and the correlation between them. Using a random selection of features to split each node yields error rates that compare favorably to Adaboost (Y. Freund & R. Schapire, Machine Learning: Proceedings of the Thirteenth International conference, ***, 148–156), but are more robust with respect to noise. Internal estimates monitor error, strength, and correlation and these are used to show the response to increasing the number of features used in the splitting. Internal estimates are also used to measure variable importance. These ideas are also applicable to regression.
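The two randomizations described above (a bootstrap sample per tree and a random feature subset per split) can be demonstrated with a hand-rolled ensemble. This sketch delegates per-split feature sampling to scikit-learn's DecisionTreeClassifier via max_features and is an illustration, not a reference implementation.

```python
# Sketch: bagged randomized trees with majority voting on the iris data.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
rng = np.random.default_rng(0)

trees = []
for _ in range(25):
    idx = rng.integers(0, len(X), len(X))               # bootstrap sample
    tree = DecisionTreeClassifier(max_features="sqrt",  # random split features
                                  random_state=int(rng.integers(1 << 30)))
    trees.append(tree.fit(X[idx], y[idx]))

votes = np.stack([t.predict(X) for t in trees])         # (n_trees, n_samples)
majority = np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
print("training accuracy:", (majority == y).mean())
```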
Tapestry: a resilient global-scale overlay for service deployment We present Tapestry, a peer-to-peer overlay routing infrastructure offering efficient, scalable, location-independent routing of messages directly to nearby copies of an object or service using only localized resources. Tapestry supports a generic decentralized object location and routing applications programming interface using a self-repairing, soft-state-based routing layer. The paper presents the Tapestry architecture, algorithms, and implementation. It explores the behavior of a Tapestry deployment on PlanetLab, a global testbed of approximately 100 machines. Experimental results show that Tapestry exhibits stable behavior and performance as an overlay, despite the instability of the underlying network layers. Several widely distributed applications have been implemented on Tapestry, illustrating its utility as a deployment infrastructure.
Chains of recurrences—a method to expedite the evaluation of closed-form functions Chains of Recurrences (CR's) are introduced as an effective method to evaluate functions at regular intervals. Algebraic properties of CR's are examined and an algorithm that constructs a CR for a given function is explained. Finally, an implementation of the method in MAXIMA/Common Lisp is discussed.
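For a polynomial on a regular grid, the chain of recurrences reduces to a finite-difference table, which makes the cost claim easy to check: after seeding, each new point needs only d additions for a degree-d polynomial. A small sketch under that simplifying assumption (names are illustrative; this is not the MAXIMA implementation mentioned above):

```python
# Sketch: evaluate c0 + c1*x + ... + cd*x^d at x0, x0+h, x0+2h, ...
# by advancing a chain of recurrences (a finite-difference table).

def poly_on_grid(coeffs, x0, h, count):
    d = len(coeffs) - 1
    f = lambda x: sum(c * x ** k for k, c in enumerate(coeffs))
    # Seed: forward differences of f at x0, x0+h, ..., x0+d*h.
    cr = [f(x0 + i * h) for i in range(d + 1)]
    for level in range(1, d + 1):
        for i in range(d, level - 1, -1):
            cr[i] -= cr[i - 1]
    out = []
    for _ in range(count):
        out.append(cr[0])
        for i in range(d):        # advance the chain: d additions per point
            cr[i] += cr[i + 1]
    return out

print(poly_on_grid([1, 0, 2], x0=0.0, h=1.0, count=5))  # 1 + 2x^2 -> [1, 3, 9, 19, 33]
```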
Consensus problems in networks of agents with switching topology and time-delays. In this paper, we discuss consensus problems for a network of dynamic agents with fixed and switching topologies. We analyze three cases: i) networks with switching topology and no time-delays, ii) networks with fixed topology and communication time-delays, and iii) max-consensus problems (or leader determination) for groups of discrete-time agents. In each case, we introduce a linear/nonlinear consensus protocol and provide convergence analysis for the proposed distributed algorithm. Moreover, we establish a connection between the Fiedler eigenvalue of the information flow in a network (i.e. algebraic connectivity of the network) and the negotiation speed (or performance) of the corresponding agreement protocol. It turns out that balanced digraphs play an important role in addressing average-consensus problems. We introduce disagreement functions that play the role of Lyapunov functions in convergence analysis of consensus protocols. A distinctive feature of this work is to address consensus problems for networks with directed information flow. We provide analytical tools that rely on algebraic graph theory, matrix theory, and control theory. Simulations are provided that demonstrate the effectiveness of our theoretical results.
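The linear consensus protocol referenced above is straightforward to simulate in the delay-free case. This sketch iterates x <- x - eps*L*x, redrawing the (undirected, connected) topology each round from a small set of hypothetical graphs; the step size eps is kept below 1/(max degree) so the iteration is stable, and the average of the initial states is preserved.

```python
# Sketch: discrete-time average consensus under a switching topology.
import numpy as np

def consensus(x0, topologies, eps=0.2, rounds=100, seed=0):
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    for _ in range(rounds):
        A = topologies[rng.integers(len(topologies))]  # pick this round's graph
        L = np.diag(A.sum(axis=1)) - A                 # graph Laplacian
        x = x - eps * L @ x                            # consensus update
    return x

ring = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]], float)
path = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], float)
print(consensus([4.0, -2.0, 7.0, 1.0], [ring, path]))  # all entries near 2.5
```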
Gossip-based aggregation in large dynamic networks As computer networks increase in size, become more heterogeneous and span greater geographic distances, applications must be designed to cope with the very large scale, poor reliability, and often, with the extreme dynamism of the underlying network. Aggregation is a key functional building block for such applications: it refers to a set of functions that provide components of a distributed system access to global information including network size, average load, average uptime, location and description of hotspots, and so on. Local access to global information is often very useful, if not indispensable for building applications that are robust and adaptive. For example, in an industrial control application, some aggregate value reaching a threshold may trigger the execution of certain actions; a distributed storage system will want to know the total available free space; load-balancing protocols may benefit from knowing the target average load so as to minimize the load they transfer. We propose a gossip-based protocol for computing aggregate values over network components in a fully decentralized fashion. The class of aggregate functions we can compute is very broad and includes many useful special cases such as counting, averages, sums, products, and extremal values. The protocol is suitable for extremely large and highly dynamic systems due to its proactive structure---all nodes receive the aggregate value continuously, thus being able to track any changes in the system. The protocol is also extremely lightweight, making it suitable for many distributed applications including peer-to-peer and grid computing systems. We demonstrate the efficiency and robustness of our gossip-based protocol both theoretically and experimentally under a variety of scenarios including node and communication failures.
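The averaging instance of the framework above reduces to pairwise mean exchanges. In the synchronous sketch below, each exchange replaces both participants' values with their mean, so the global average is preserved every round while the spread decays; the membership management and failure handling of the full protocol are omitted.

```python
# Sketch: push-pull gossip averaging; the global mean is invariant.
import random

def gossip_average(values, rounds=30, seed=7):
    rng = random.Random(seed)
    v = list(values)
    n = len(v)
    for _ in range(rounds):
        for i in range(n):
            j = rng.randrange(n)              # random peer (may be self)
            v[i] = v[j] = (v[i] + v[j]) / 2   # both adopt the pair's mean
    return v

print(gossip_average([8.0, 0.0, 2.0, 6.0]))  # every node converges near 4.0
```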
Linear Amplification with Nonlinear Components A technique for producing bandpass linear amplification with nonlinear components (LINC) is described. The bandpass signal first is separated into two constant envelope component signals. All of the amplitude and phase information of the original bandpass signal is contained in phase modulation on the component signals. These constant envelope signals can be amplified or translated in frequency by amplifiers or mixers which have nonlinear input-output amplitude transfer characteristics. Passive linear combining of the amplified and/or translated component signals produces an amplified and/or translated replica of the original signal.
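The decomposition at the heart of LINC is easy to verify numerically at complex baseband: any sample with envelope A <= A_max splits into two constant-envelope phasors out-phased by arccos(A/A_max), and their sum reconstructs the original exactly. A sketch with arbitrary test data:

```python
# Sketch: LINC out-phasing split into two constant-envelope components.
import numpy as np

def linc_split(s, a_max):
    amp = np.abs(s)
    phi = np.angle(s)
    theta = np.arccos(np.clip(amp / a_max, 0.0, 1.0))  # out-phasing angle
    s1 = (a_max / 2) * np.exp(1j * (phi + theta))
    s2 = (a_max / 2) * np.exp(1j * (phi - theta))
    return s1, s2

t = np.linspace(0.0, 1.0, 8)
s = (0.3 + 0.6 * t) * np.exp(1j * 2 * np.pi * t)  # varying-envelope signal
s1, s2 = linc_split(s, a_max=1.0)
print(np.allclose(s1 + s2, s))       # True: the sum restores the signal
print(np.allclose(np.abs(s1), 0.5))  # True: each component is constant envelope
```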
Sensor network gossiping or how to break the broadcast lower bound Gossiping is an important problem in Radio Networks that has been well studied, leading to many important results. Due to strong resource limitations of sensor nodes, previous solutions are frequently not feasible in Sensor Networks. In this paper, we study the gossiping problem in the restrictive context of Sensor Networks. By exploiting the geometry of sensor node distributions, we present a reduced, optimal running time of O(D + Δ) for an algorithm that completes gossiping with high probability in a Sensor Network of unknown topology and adversarial wake-up, where D is the diameter and Δ the maximum degree of the network. Given that an algorithm for gossiping also solves the broadcast problem, our result proves that the classic lower bound of [16] can be broken if nodes are allowed to do preprocessing.
Towards elastic SDR architectures using dynamic task management. SDR platforms integrating several types and numbers of processing elements in System-on-Chips become an attractive solution for baseband processing in wireless systems. In order to cope with the diversity of protocol applications and the heterogeneity of multi-core architectures, a hierarchical approach for workload distribution is proposed in this paper. Specifically, a system-level scheduler is employed to map applications to multiple processing clusters, complemented with a cluster-level scheduler - the CoreManager - for dynamic resource allocation and configuration as well as for task and data scheduling. A performance analysis of the proposed approach is presented, which shows the advantages of dynamic scheduling against a static approach for variable workloads in the LTE-Advanced uplink multi-user scenarios.
A Hybrid 1st/2nd-Order VCO-Based CTDSM With Rail-to-Rail Artifact Tolerance for Bidirectional Neural Interface Bi-directional brain machine interfaces enable simultaneous brain activity monitoring and neural modulation. However, stimulation artifact can saturate the instrumentation front-end while concurrent on-site recording is needed. This brief presents a voltage-controlled-oscillator (VCO) based continuous-time ΔΣ modulator (CTDSM) with rail-to-rail input range and fast artifact tracking. A hybrid 1st/2nd-order loop is designed to achieve high dynamic range (DR) and large input range. Stimulation artifact is detected by a phase counter and compensated by the 1st-order loop. The residue signal is digitized by the 2nd-order loop for high precision. Redundancy between the two loops is implemented as feedback capacitor elements with non-binary ratio to guarantee feedback stability and linearity. Fabricated in a 55-nm CMOS process, the prototype achieves 65.7 dB SNDR across a 10 kHz bandwidth with a full scale of 200 mVpp, and a ±1.2 V input range is achieved to suppress artifacts. A saline-based experiment with simultaneous stimulation and recording demonstrates that the implemented system can track and tolerate rail-to-rail stimulation artifact within 30 μs while small neural signals can be continuously monitored.
score_0..score_13: 1.2, 0.2, 0.2, 0.2, 0.2, 0.033333, 0, 0, 0, 0, 0, 0, 0, 0
A Low-Power CMOS LNA For Ultra-Wideband Wireless Receivers In this paper, a low-power ultra-wideband (UWB) low-noise amplifier (LNA) is proposed. The proposed structure combines a common-gate stage with band-pass filters, which reduces the parasitic capacitance of the transistor and achieves wideband input matching. The pi-section LC network technique is employed in the LNA to achieve sufficiently flat gain. A bias resistor of large value is placed between the source and body nodes to prevent the body effect and reduce noise. Numerical simulations are based on the TSMC 0.18 μm 1P6M process. The LNA achieves 10.0-12.4 dB gain from 3 GHz to 10.6 GHz and a 3.25 dB noise figure at 8.5 GHz, operates from a 1.5 V power supply, and dissipates 3 mW without the output buffer.
An ultra-wideband CMOS low noise amplifier for 3-5-GHz UWB system An ultra-wideband (UWB) CMOS low noise amplifier (LNA) topology that combines a narrowband LNA with a resistive shunt-feedback is proposed. The resistive shunt-feedback provides wideband input matching with small noise figure (NF) degradation by reducing the Q-factor of the narrowband LNA input and flattens the passband gain. The proposed UWB amplifier is implemented in 0.18-μm CMOS technol...
A Broadband Noise-Canceling CMOS LNA for 3.1–10.6-GHz UWB Receivers An ultra-wideband 3.1-10.6-GHz low-noise amplifier employing a broadband noise-canceling technique is presented. By using the proposed circuit and design methodology, the noise from the matching device is greatly suppressed over the desired UWB band, while the noise from other devices performing noise cancellation is minimized by the systematic approach. Fabricated in a 0.18-μm CMOS process, the ...
A novel power optimization technique for ultra-low power RFICs This paper presents a novel power optimization technique for ultra-low power (ULP) RFICs. A new figure of merit, namely the gmfT-to-current ratio (gmfT/ID), is defined for a MOS transistor, which accounts for both the unity-gain frequency and the current consumption. It is demonstrated both analytically and experimentally that gmfT/ID reaches its maximum value in the moderate inversion region. Next, using the proposed method, a power-optimized common-gate low-noise amplifier (LNA) with active load has been designed and fabricated in a 0.18 μm CMOS process operating at 950 MHz. Measurement results show a noise figure (NF) of 4.9 dB and a small-signal gain of 15.6 dB with a record-breaking power dissipation of only 100 μW.
A 3.6mW differential common-gate CMOS LNA with positive-negative feedback A common-gate (CG) LNA has been widely investigated because it features superior bandwidth, linearity, stability, and robustness to PVT variations compared to a common-source (CS) topology. In spite of these advantages, the dependence of gain and NF on the restricted transconductance (gm) renders this topology unsuitable for various wireless applications. The input impedance of a CG LNA is simplified as 1/gm, and the noise factor is inversely proportional to gm. In order to achieve high gain and low NF, gm should be increased, which deteriorates the 50 Ω input impedance matching for a conventional CG LNA.
Bandwidth Extension Techniques for CMOS Amplifiers Inductive-peaking-based bandwidth extension techniques for CMOS amplifiers in wireless and wireline applications are presented. To overcome the conventional limits on bandwidth extension ratios, these techniques augment inductive peaking using capacitive splitting and magnetic coupling. It is shown that a critical design constraint for optimum bandwidth extension is the ratio of the drain capacita...
Cache operations by MRU change The performance of set associative caches is analyzed. The method used is to group the cache lines into regions according to their positions in the replacement stacks of a cache, and then to observe how the memory access of a CPU is distributed over these regions. Results from the preserved CPU traces show that the memory accesses are heavily concentrated on the most recently used (MRU) region in the cache. The concept of MRU change is introduced; the idea is to use the event that the CPU accesses a non-MRU line to approximate the time the CPU is changing its working set. The concept is shown to be useful in many aspects of cache design and performance evaluation, such as comparison of various replacement algorithms, improvement of prefetch algorithms, and speedup of cache simulation.
Broadband MIMO-OFDM Wireless Communications Orthogonal frequency division multiplexing (OFDM) is a popular method for high data rate wireless transmission. OFDM may be combined with antenna arrays at the transmitter and receiver to increase the diversity gain and/or to enhance the system capacity on time-varying and frequency-selective channels, resulting in a multiple-input multiple-output (MIMO) configuration. The paper explores various p...
Supporting Aggregate Queries Over Ad-Hoc Wireless Sensor Networks We show how the database community's notion of a generic query interface for data aggregation can be applied to ad-hoc networks of sensor devices. As has been noted in the sensor network literature, aggregation is important as a data reduction tool; networking approaches, however, have focused on application specific solutions, whereas our in-network aggregation approach is driven by a general purpose, SQL-style interface that can execute queries over any type of sensor data while providing opportunities for significant optimization. We present a variety of techniques to improve the reliability and performance of our solution. We also show how grouped aggregates can be efficiently computed and offer a comparison to related systems and database projects.
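The in-network approach above relies on aggregates that can be merged from partial state; for AVERAGE, the partial state record is a (sum, count) pair. A toy convergecast over a hypothetical routing tree, where each sensor forwards one merged record instead of all raw readings:

```python
# Sketch: computing AVERAGE in-network via mergeable (sum, count) records.

def merge(a, b):
    return (a[0] + b[0], a[1] + b[1])   # combine two partial states

def aggregate(tree, readings, node):
    state = (readings[node], 1)         # this node's own reading
    for child in tree.get(node, []):
        state = merge(state, aggregate(tree, readings, child))
    return state                        # one record forwarded to the parent

tree = {"sink": ["a", "b"], "a": ["c", "d"], "b": [], "c": [], "d": []}
readings = {"sink": 20, "a": 22, "b": 26, "c": 24, "d": 18}
s, c = aggregate(tree, readings, "sink")
print("network AVERAGE =", s / c)       # 110 / 5 = 22.0
```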
Side-Channel Leaks in Web Applications: A Reality Today, a Challenge Tomorrow With software-as-a-service becoming mainstream, more and more applications are delivered to the client through the Web. Unlike a desktop application, a web application is split into browser-side and server-side components. A subset of the application’s internal information flows are inevitably exposed on the network. We show that despite encryption, such a side-channel information leak is a realistic and serious threat to user privacy. Specifically, we found that surprisingly detailed sensitive information is being leaked out from a number of high-profile, top-of-the-line web applications in healthcare, taxation, investment and web search: an eavesdropper can infer the illnesses/medications/surgeries of the user, her family income and investment secrets, despite HTTPS protection; a stranger on the street can glean enterprise employees' web search queries, despite WPA/WPA2 Wi-Fi encryption. More importantly, the root causes of the problem are some fundamental characteristics of web applications: stateful communication, low entropy input for better interaction, and significant traffic distinctions. As a result, the scope of the problem seems industry-wide. We further present a concrete analysis to demonstrate the challenges of mitigating such a threat, which points to the necessity of a disciplined engineering practice for side-channel mitigations in future web application developments.
An almost necessary and sufficient condition for robust stability of closed-loop systems with disturbance observer The disturbance observer (DOB)-based controller has been widely employed in industrial applications due to its powerful ability to reject disturbances and compensate plant uncertainties. In spite of various successful applications, no necessary and sufficient condition for robust stability of the closed loop systems with the DOB has been reported in the literature. In this paper, we present an almost necessary and sufficient condition for robust stability when the Q-filter has a sufficiently small time constant. The proposed condition indicates that robust stabilization can be achieved against arbitrarily large (but bounded) uncertain parameters, provided that an outer-loop controller stabilizes the nominal system, and uncertain plant is of minimum phase.
IEEE 802.11 wireless LAN implemented on software defined radio with hybrid programmable architecture This paper describes a prototype software defined radio (SDR) transceiver on a distributed and heterogeneous hybrid programmable architecture; it consists of a central processing unit (CPU), digital signal processors (DSPs), and pre/postprocessors (PPPs), and supports both Personal Handy Phone System (PHS), and IEEE 802.11 wireless local area network (WLAN). It also supports system switching between PHS and WLAN and over-the-air (OTA) software downloading. In this paper, we design an IEEE 802.11 WLAN around the SDR; we show the software architecture of the SDR prototype and describe how it handles the IEEE 802.11 WLAN protocol. The medium access control (MAC) sublayer functions are executed on the CPU, while the physical layer (PHY) functions such as modulation/demodulation are processed by the DSPs; higher speed digital signal processes are run on the PPP implemented on a field-programmable gate array (FPGA). The most difficult problem in implementing the WLAN in this way is meeting the short interframe space (SIFS) requirement of the IEEE 802.11 standard; we elucidate the potential weakness of the current configuration and specify a way of implementing the IEEE 802.11 protocol that avoids this problem. This paper also describes an experimental evaluation of the prototype for WLAN use, the results of which agree well with computer-simulation results.
Conductance modulation techniques in switched-capacitor DC-DC converter for maximum-efficiency tracking and ripple mitigation in 22nm Tri-gate CMOS Active conduction modulation techniques are demonstrated in a fully integrated multi-ratio switched-capacitor voltage regulator with hysteretic control, implemented in 22nm tri-gate CMOS with high-density MIM capacitor. We present (i) an adaptive switching frequency and switch-size scaling scheme for maximum-efficiency tracking across a wide range of voltages and currents, governed by a frequency-based control law that is experimentally validated across multiple dies and temperatures, and (ii) a simple active ripple mitigation technique that modulates the gate drive of select MOSFET switches effectively in all conversion modes. Efficiency boosts of up to 15% are measured under light-load conditions. Load-independent output ripple of <50 mV is achieved, enabling fewer interleaving phases. Testchip implementations and measurements demonstrate ease of integration in SoC designs, power-efficiency benefits, and EMI/RFI improvements.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized, while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
1.24
0.010248
0.002449
0.001429
0.000952
0.000204
0
0
0
0
0
0
0
0
Cascade High Gain Predictors for a Class of Nonlinear Systems This work presents a set of cascade high gain predictors to reconstruct the state vector of triangular nonlinear systems with delayed output. By using a Lyapunov-Krasovskii approach, simple sufficient conditions ensuring the exponential convergence of the observation error towards zero are given. All predictors used in the cascade have the same structure, a feature that greatly improves the ease of their implementation. The result is illustrated by some simulations.
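A minimal simulation sketch may help make the construction concrete. The snippet below implements only the basic high-gain observer that each stage of such a cascade reuses, for a two-state triangular system with undelayed output; the nonlinearity, gains, and step sizes are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch (not the paper's cascade): the basic high-gain observer
# building block, for a triangular system  x1' = x2,  x2' = phi(x),  y = x1.
import numpy as np

def phi(x):                       # globally Lipschitz nonlinearity (assumed)
    return -np.sin(x[0]) - x[1]

theta = 5.0                       # high-gain parameter: larger -> faster decay
dt, T = 1e-3, 10.0
x  = np.array([1.0, 0.0])         # true state
xh = np.zeros(2)                  # observer estimate

for _ in range(int(T / dt)):
    e = x[0] - xh[0]              # output injection error, y = x1
    # plant (forward Euler)
    x  = x  + dt * np.array([x[1], phi(x)])
    # high-gain observer: injection gains scale as theta and theta**2
    xh = xh + dt * np.array([xh[1] + 2 * theta * e, phi(xh) + theta**2 * e])

print("final estimation error:", np.abs(x - xh))
```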
Robust compensation of a chattering time-varying input delay We investigate the design of a prediction-based controller for a linear system subject to a time-varying input delay, not necessarily causal, meaning that the information feeding the system can be older than information previously received. We propose to use the current delay value in the prediction employed in the control law. Modeling the input delay as a transport partial differential equation, we prove asymptotic tracking of the system state, provided that the average ℒ2-norm of the delay time-derivative is sufficiently small. This result is obtained by generalizing the Halanay inequality to time-varying differential inequalities.
Robustness of Adaptive Control under Time Delays for Three-Dimensional Curve Tracking. We analyze the robustness of a class of controllers that enable three-dimensional curve tracking by a free moving particle. The free particle tracks the closest point on the curve. By building a strict Lyapunov function and robustly forward invariant sets, we show input-to-state stability under predictable tolerance and safety bounds that guarantee robustness under control uncertainty, input delays, and a class of polygonal state constraints, including adaptive tracking and parameter identification under unknown control gains. Such an understanding may provide certified performance when the control laws are applied to real-life systems.
A Chain Observer for Nonlinear Systems with Multiple Time-Varying Measurement Delays. This paper presents a method for designing state observers with exponential error decay for nonlinear systems whose output measurements are affected by known time-varying delays. A modular approach is followed, where subobservers are connected in cascade to achieve a desired exponential convergence rate (chain observer). When the delay is small, a single-step observer is sufficient to carry out the goal. Two or more subobservers are needed in the presence of large delays. The observer employs delay-dependent time-varying gains to achieve the desired exponential error decay. The proposed approach makes it possible to deal with vector output measurements, where each output component can be affected by a different delay. Relationships among the error decay rate, the bound on the measurement delays, the observer gains, and the Lipschitz constants of the system are presented. The method is illustrated on the synchronization problem of continuous-time hyperchaotic systems with buffered measurements.
Predictor-Based Control Of Linear Systems With Large And Variable Measurement Delays This paper concerns the problem of the control of linear systems by means of feedback from a delayed output, where the delay is known and time-varying. The main advantage of the approach is that it can be applied to systems with any delay bound, i.e. not only small delays. The predictor is based on a combination of finite-dimensional elementary predictors whose number can be suitably chosen to compensate any delay. The single-predictor element is an original proposal, and the class of delays to which the scheme can be applied includes, but is not limited to, continuous delay functions.
Asymptotic stability for time-variant systems and observability: Uniform and nonuniform criteria This paper presents some new criteria for uniform and nonuniform asymptotic stability of equilibria for time-variant differential equations, within a Lyapunov approach. The stability criteria are formulated in terms of certain observability conditions with the output derived from the Lyapunov function. For some classes of systems, this system-theoretic interpretation proves to be fruitful since, after establishing the invariance of observability under output injection, it enables us to check the stability criteria on a simpler system. This procedure is illustrated for some classical examples.
From Continuous-Time Design to Sampled-Data Design of Observers In this work, a sampled-data nonlinear observer is designed using a continuous-time design coupled with an inter-sample output predictor. The proposed sampled-data observer is a hybrid system. It is shown that under certain conditions, the robustness properties of the continuous-time design are inherited by the sampled-data design, as long as the sampling period is not too large. The approach is applied to linear systems and to triangular globally Lipschitz systems.
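The following sketch shows one commonly assumed form of such a hybrid scheme for a linear plant: a Luenberger observer driven between samples by an output predictor w(t) that is reset to the true measurement at each sampling instant. The plant matrices, gain L, and sampling period are illustrative assumptions, not values from the paper.

```python
# Minimal sketch (assumed form): sampled-data observer = continuous-time
# observer + inter-sample output predictor w, reset at sampling instants.
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -1.0]])   # illustrative stable plant
C = np.array([[1.0, 0.0]])
L = np.array([[3.0], [2.0]])               # observer gain (assumed)

dt, Ts, T = 1e-3, 0.1, 8.0                 # Euler step, sampling period, horizon
steps_per_sample = int(round(Ts / dt))

x  = np.array([[1.0], [0.0]])              # true state
xh = np.zeros((2, 1))                      # observer estimate
w  = C @ x                                 # inter-sample output predictor

for k in range(int(T / dt)):
    if k % steps_per_sample == 0:          # sampling instant: reset predictor
        w = C @ x
    x  = x  + dt * (A @ x)                 # plant (autonomous, forward Euler)
    xh = xh + dt * (A @ xh + L @ (w - C @ xh))   # continuous-time observer
    w  = w  + dt * (C @ (A @ xh))          # predict the output between samples

print("final estimation error:", np.abs(x - xh).ravel())
```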
Time-Delay Compensation by Communication Disturbance Observer for Bilateral Teleoperation Under Time-Varying Delay This paper presents the effectiveness of a time-delay compensation method based on the concept of network disturbance and communication disturbance observer for bilateral teleoperation systems under time-varying delay. The most attractive feature of the compensation method is that it works without time-delay models (model-based time-delay compensation approaches like the Smith predictor usually need ti...
The Emergence of Intelligent Enterprises: From CPS to CPSS When IEEE Intelligent Systems solicited ideas for a new department, cyber-physical systems (CPS) received overwhelming support. Cyber-Physical-Social Systems (CPSS) is the new name for CPS. CPSS is the enabling platform technology that will lead us to an era of intelligent enterprises and industries. Internet use and cyberspace activities have created an overwhelming demand for the rapid development and application of CPSS. CPSS must be pursued with a multidisciplinary approach involving the physical, social, and cognitive sciences, and AI-based intelligent systems will be key to any successful construction and deployment.
Design-oriented estimation of thermal noise in switched-capacitor circuits. Thermal noise represents a major limitation on the performance of most electronic circuits. It is particularly important in switched circuits, such as the switched-capacitor (SC) filters widely used in mixed-mode CMOS integrated circuits. In these circuits, switching introduces a boost in the power spectral density of the thermal noise due to aliasing. Unfortunately, even though the theory of nois...
A new concept for wireless reconfigurable receivers In this article we present the Self-Adaptive Universal Receiver (SAUR), a novel wireless reconfigurable receiver architecture. This scheme is based on blind recognition of the system in use, operating on a new radio interface comprising two functional phases. The first phase performs a wideband analysis (WBA) on the received signal to determine its standard. The second phase corresponds to demodulation. Here we only focus on the WBA phase, which consists of an iterative process to find the bandwidth compatible with the associated signal processing techniques. The blind standard recognition performed in the last iteration step of this process uses radial basis function neural networks. This allows a strong analogy between our approach and conventional pattern recognition problems. The efficiency of this type of blind recognition is illustrated with the results of extensive simulations performed in our laboratory using true data of received signals.
A 13-b 40-MSamples/s CMOS pipelined folding ADC with background offset trimming Two key concepts of pipelining and background offset trimming are applied to demonstrate a 13-b 40-MSamples/s CMOS analog-to-digital converter (ADC) based on the basic folding and interpolation architecture. Folding amplifier stages made of simple differential pairs are pipelined using distributed interstage track-and-holders. Background offset trimming implemented with a highly oversampling delta-sigma modulator enhances the resolution of the CMOS folders beyond 12 bits. The background offset trimming circuit continuously measures and adjusts the offsets of the folding amplifiers without interfering with the normal operation. The prototype system is further refined using subranging and digital correction, and exhibits a spurious-free dynamic range (SFDR) of 82 dB at 40 MSamples/s. The measured differential nonlinearity (DNL) and integral nonlinearity (INL) are about ±0.5 and ±2.0 LSB, respectively. The chip fabricated in 0.5-μm CMOS occupies 8.7 mm² and consumes 800 mW at 5 V.
20.3 A feedforward controlled on-chip switched-capacitor voltage regulator delivering 10W in 32nm SOI CMOS On-chip (or fully integrated) switched-capacitor (SC) voltage regulators (SCVR) have recently received a lot of attention due to their ease of monolithic integration. The use of deep trench capacitors can lead to SCVR implementations that simultaneously achieve high efficiency, high power density, and fast response time. For the application of granular power distribution of many-core microprocessor systems, the on-chip SCVR must maintain an output voltage above a certain minimum level VOUT,min in order for the microprocessor core to meet setup time requirements. Following a transient load change, the output voltage typically exhibits a droop due to parasitic inductances and resistances in the power distribution network. Therefore, the steady-state output voltage is kept high enough to ensure VOUT > VOUT,min at all times, thereby introducing an output voltage overhead that leads to increased system power consumption. The output voltage droop can be reduced by implementing fast regulation and a sufficient amount of on-chip decoupling capacitance. However, a large amount of on-chip decoupling capacitance is needed to significantly reduce the droop, and it becomes impractical to implement owing to the large chip area overhead required. This paper presents a feedforward control scheme that significantly reduces the output voltage droop in the presence of a large input voltage droop following a transient event. This in turn reduces the required output voltage overhead and may lead to significant overall system power savings.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized, while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
1.026712
0.026732
0.02601
0.019905
0.018102
0.014149
0.005578
0.000087
0
0
0
0
0
0
Complements on phase noise analysis and design of CMOS ring oscillators This paper reports two complements on phase noise analysis and design of CMOS ring oscillators. In detail, it proposes an extension to current analytical methods for predicting flicker noise contribution to phase noise in differential CMOS ring oscillators. The results of the proposed analysis are compared with the existing methods and simulation results by SpectreRF for two differential topologies. The comparative analyses confirm that the proposed method leads to an improvement of the prediction accuracy in spite of the small increase of complexity since it only requires device dimensions in addition to the data required by existing methods. The proposed method may also be used to indicate a minimum achievable close-in phase noise in a process node. Moreover, a design approach for low phase noise inverter-based ring oscillator is proposed and tested by means of simulation. The limitations of the proposed method can be observed from this case study.
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use dominance frontiers, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
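Dominance frontiers admit a very compact computation once immediate dominators are known. The sketch below uses the later formulation popularized by Cooper, Harvey, and Kennedy rather than the paper's original algorithm; the toy CFG and idom map are illustrative assumptions.

```python
# Minimal sketch: dominance frontiers from immediate dominators, for a
# diamond CFG  A -> B, A -> C, B -> D, C -> D  (illustrative assumption).
idom  = {"A": "A", "B": "A", "C": "A", "D": "A"}   # entry dominates itself
preds = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}

df = {n: set() for n in idom}
for n, ps in preds.items():
    if len(ps) >= 2:                 # join point: walk up from each pred
        for p in ps:
            runner = p
            while runner != idom[n]:
                df[runner].add(n)    # n is in runner's dominance frontier
                runner = idom[runner]

print(df)   # {'A': set(), 'B': {'D'}, 'C': {'D'}, 'D': set()}
```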
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ⋄W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ⋄W. Thus, ⋄W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
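The core lookup primitive is easy to sketch. The toy below hashes keys and nodes onto a small identifier ring and resolves a key to its successor node; finger tables are omitted, so lookup is linear here rather than Chord's O(log n), and the node names and ring size are illustrative assumptions.

```python
# Minimal sketch of Chord's core idea: consistent hashing + successor lookup.
import hashlib

M = 8                                    # identifier bits -> ring of size 256
RING = 2 ** M

def h(key: str) -> int:
    return int(hashlib.sha1(key.encode()).hexdigest(), 16) % RING

nodes = sorted(h(f"node-{i}") for i in range(5))   # node IDs on the ring

def successor(kid: int) -> int:
    """First node clockwise from the key id (wraps around the ring)."""
    for n in nodes:
        if n >= kid:
            return n
    return nodes[0]

key = "some-data-item"
print(f"key {key!r} -> id {h(key)} -> stored on node {successor(h(key))}")
```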
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area² product (EDA²P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA²P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
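As a concrete instance, the sketch below applies ADMM to the lasso, one of the problems the review discusses: a ridge-like x-update, a soft-thresholding z-update, and a scaled dual update. Problem sizes, rho, and lambda are illustrative assumptions.

```python
# Minimal sketch: ADMM for the lasso  min 0.5*||Ax - b||^2 + lam*||x||_1.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20); x_true[:3] = [2.0, -1.0, 0.5]
b = A @ x_true + 0.01 * rng.standard_normal(50)

lam, rho = 0.1, 1.0
x = z = u = np.zeros(20)
Q = np.linalg.inv(A.T @ A + rho * np.eye(20))   # factor once, reuse each iter

def soft(v, k):                         # proximal operator of k*||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

for _ in range(200):
    x = Q @ (A.T @ b + rho * (z - u))   # x-update: ridge-like solve
    z = soft(x + u, lam / rho)          # z-update: soft thresholding
    u = u + x - z                       # scaled dual update

print("recovered support:", np.nonzero(np.abs(z) > 1e-3)[0])
```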
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)-size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε²}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D+√n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(log N), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator, while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via error amplifier) with a moderate-gain high-frequency path (via bandpass filter) at the inputs of the PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate the two zeros required in a Type III compensator. A constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process, with the constant Gm/C biasing technique being applied to one of the designs. Measurement results show that the converter output settles within 7 μs for a load current step of 500 mA. A peak efficiency of 97% is obtained at 360 mW output power, and a high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of the proposed compensator are reduced by more than 75% in both designs, compared to an equivalent conventional Type III compensator.
Fundamental analysis of a car to car visible light communication system This paper presents a mathematical model for car-to-car (C2C) visible light communications (VLC) that aims to predict the system performance under different communication geometries. A market-weighted headlamp beam pattern model is employed. We consider both the line-of-sight (LOS) and non-line-of-sight (NLOS) links, and outline the relationship between the communication distance, the system bit error rate (BER) performance, and the BER distribution on a vertical plane. Results show that by placing a photodetector (PD) at a height of 0.2-0.4 m above the road surface in the car, the communication coverage range can be extended up to 20 m at a data rate of 2 Mbps.
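A toy link-budget sketch conveys the qualitative behavior, though not the paper's market-weighted headlamp model: line-of-sight inverse-square path loss feeding an OOK detector with BER = Q(sqrt(SNR)). All constants below are illustrative assumptions.

```python
# Toy LOS VLC link budget (illustrative only, not the paper's model).
import numpy as np
from math import erfc, sqrt

Pt   = 1.0           # optical transmit power (W), assumed
gain = 1e-4          # lumped optics/geometry/responsivity factor, assumed
N0   = 1e-14         # lumped noise power term, assumed

def qfunc(v):        # Gaussian tail function Q(v)
    return 0.5 * erfc(v / sqrt(2))

for d in [5, 10, 20, 40]:                  # distance in metres
    Pr  = Pt * gain / d**2                 # LOS inverse-square path loss
    snr = Pr**2 / N0                       # square-law photodetection SNR
    print(f"d={d:3d} m  SNR={10*np.log10(snr):6.1f} dB  "
          f"BER={qfunc(sqrt(snr)):.2e}")
```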
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized, while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Integration of Array Antennas in Chip Package for 60-GHz Radios. This paper discusses the integration of array antennas in chip packages for highly integrated 60-GHz radios. First, we evaluate fixed-beam array antennas, showing that most of them suffer from feed network complexity and require sophisticated process techniques to achieve enhanced performance. We describe the grid array antenna and show that it is a good choice for fixed-beam array antenna applicatio...
A 4-Bit, 1.6 GS/s Low Power Flash ADC, Based on Offset Calibration and Segmentation A low power 4-bit, 1.6 GS/s flash ADC is presented. A new power reduction technique which masks the unused blocks in a semi-pipeline chain of latches and encoders is introduced. The proposed circuit determines the unused blocks based on a pre-sensing of the signal. Moreover, a reference voltage generator with very low static power dissipation is used. Novel techniques to reduce the sensitivity to dynamic noise are proposed to suppress the noise effects on the reference generator. The proposed circuit reduces the power consumption by 20 percent compared to the conventional structure when a Nyquist-rate OFDM signal is applied. The INL and DNL of the converter are smaller than 0.3 LSB after calibration. The converter offers 3.8 effective number of bits (ENOB) at a 1.6 GS/s sampling rate with a low-frequency input signal and more than 1.8 GHz effective resolution bandwidth (ERBW) at this sampling rate. The converter consumes a mere 15.5 mW from a 1.8 V supply, yielding an FoM of 695 fJ/conversion-step, and occupies 0.3 mm² in a 0.18 μm standard CMOS process.
Analysis of Phase Noise in Phase/Frequency Detectors The phase noise of phase/frequency detectors can significantly raise the in-band phase noise of frequency synthesizers, corrupting the modulated signal. This paper analyzes the phase noise mechanisms in CMOS phase/frequency detectors and applies the results to two different topologies. It is shown that an octave increase in the input frequency raises the phase noise by 6 dB if flicker noise is dominant and by 3 dB if white noise is dominant. An optimization methodology is also proposed that lowers the phase noise by 4 to 8 dB for a given power consumption. Simulation and analytical results agree to within 3.1 dB for the two topologies at different frequencies.
A 12.8 GS/s Time-Interleaved ADC With 25 GHz Effective Resolution Bandwidth and 4.6 ENOB This paper presents a 12.8 GS/s 32-way hierarchically time-interleaved SAR ADC with 4.6 ENOB in 65 nm CMOS. The prototype utilizes hierarchical sampling and cascode sampler circuits to enable greater than 25 GHz 3 dB effective resolution bandwidth (ERBW). We further employ a pseudo-differential SAR ADC to save power and area. The core circuit occupies only 0.23 mm 2 and consumes a total of 162 mW from dual 1.2 V/1.1 V supplies. The design achieves a SNDR of 29.4 dB at low frequencies and 26.4 dB at 25 GHz, resulting in a figure-of-merit of 0.79 pJ/conversion-step. As will be further described in the paper, the circuit architecture used in this prototype enables expansion to 25.6 GS/s or 51.2 GS/s via additional interleaving without significantly impacting ERBW.
On the Design of Wideband Transformer-Based Fourth Order Matching Networks for E-Band Receivers in 28-nm CMOS. This paper discusses the design of on-chip transformer-based fourth order filters, suitable for mm-Wave highly sensitive broadband low-noise amplifiers (LNAs) and receivers (RXs) implemented in deep-scaled CMOS. Second order effects due to layout parasitics are analyzed and new design techniques are introduced to further enhance the gain-bandwidth product of this class of filters. The design and m...
A Low Power 6-bit Flash ADC With Reference Voltage and Common-Mode Calibration In this paper, a low power 6-bit ADC that uses reference voltage and common-mode calibration is presented. A method for adjusting the differential and common-mode reference voltages used by the ADC to improve its linearity is described. Power dissipation is reduced by using small device sizes in the ADC and relying on calibration to cancel the large non-ideal offsets due to device mismatches. The ADC occupies 0.13 mm2 in 65 nm CMOS and dissipates 12 mW at a sample rate of 800 MS/s from a 1.2 V supply.
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
Scratchpad memory: design alternative for cache on-chip memory in embedded systems In this paper we address the problem of on-chip memory selection for computationally intensive applications, by proposing scratchpad memory as an alternative to cache. Area and energy for different scratchpad and cache sizes are computed using the CACTI tool, while performance was evaluated using the trace results of the simulator. The target processor chosen for evaluation was the AT91M40400. The results clearly establish scratchpad memory as a low power alternative in most situations, with an average energy reduction of 40%. Further, the average area-time reduction for the scratchpad memory was 46% of the cache memory.
Approximate counting, uniform generation and rapidly mixing Markov chains The paper studies effective approximate solutions to combinatorial counting and uniform generation problems. Using a technique based on the simulation of ergodic Markov chains, it is shown that, for self-reducible structures, almost uniform generation is possible in polynomial time provided only that randomised approximate counting to within some arbitrary polynomial factor is possible in polynomial time. It follows that, for self-reducible structures, polynomial time randomised algorithms for counting to within factors of the form (1 + n^{-β}) are available either for all β ∈ ℝ or for no β ∈ ℝ. A substantial part of the paper is devoted to investigating the rate of convergence of finite ergodic Markov chains, and a simple but powerful characterisation of rapid convergence for a broad class of chains based on a structural property of the underlying graph is established. Finally, the general techniques of the paper are used to derive an almost uniform generation procedure for labelled graphs with a given degree sequence which is valid over a much wider range of degrees than previous methods: this in turn leads to randomised approximate counting algorithms for these graphs with very good asymptotic behaviour.
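The notion of rapid mixing at the heart of the technique can be illustrated numerically: the total-variation distance of a lazy random walk on a cycle to its uniform stationary distribution decays quickly with the number of steps. The chain below is an illustrative assumption, not one from the paper.

```python
# Minimal illustration of mixing: lazy random walk on an n-cycle.
import numpy as np

n = 16
P = np.zeros((n, n))
for i in range(n):                       # lazy walk: stay 1/2, step 1/4 each way
    P[i, i] = 0.5
    P[i, (i - 1) % n] += 0.25
    P[i, (i + 1) % n] += 0.25

dist = np.zeros(n); dist[0] = 1.0        # start concentrated at one vertex
uniform = np.full(n, 1.0 / n)            # stationary distribution
for t in range(1, 101):
    dist = dist @ P
    if t in (1, 10, 50, 100):
        tv = 0.5 * np.abs(dist - uniform).sum()
        print(f"t={t:3d}  TV distance to uniform = {tv:.4f}")
```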
A theory of nonsubtractive dither A detailed mathematical investigation of multibit quantizing systems using nonsubtractive dither is presented. It is shown that by the use of dither having a suitably chosen probability density function, moments of the total error can be made independent of the system input signal but that statistical independence of the error and the input signals is not achievable. Similarly, it is demonstrated that values of the total error signal cannot generally be rendered statistically independent of one another but that their joint moments can be controlled and that, in particular, the error sequence can be rendered spectrally white. The properties of some practical dither signals are explored, and recommendations are made for dithering in audio, video, and measurement applications. The paper collects all of the important results on the subject of nonsubtractive dithering and introduces important new ones with the goal of alleviating persistent and widespread misunderstandings regarding the technique
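The paper's central claim about error moments is easy to check numerically. The sketch below quantizes constant inputs with and without triangular (TPDF) dither of 2 LSB peak-to-peak; with dither, the mean and variance of the total error become essentially independent of the input value. The LSB and sample counts are illustrative.

```python
# Minimal numerical check: TPDF nonsubtractive dither decouples the first
# two moments of the total quantization error from the input value.
import numpy as np

rng = np.random.default_rng(1)
LSB = 1.0

def quant(v):
    return np.round(v / LSB) * LSB

for x in [0.0, 0.25, 0.5]:                     # constant inputs, in LSB
    n = 200_000
    # TPDF dither = sum of two independent uniform(-LSB/2, LSB/2) samples
    d = rng.uniform(-LSB/2, LSB/2, n) + rng.uniform(-LSB/2, LSB/2, n)
    err_dith = quant(x + d) - x                # total error, dithered
    err_raw  = quant(np.full(n, x)) - x        # error without dither
    print(f"x={x:4.2f}  raw err={err_raw.mean():+.3f}  "
          f"dithered mean={err_dith.mean():+.4f}  var={err_dith.var():.4f}")
```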
Master Data Quality Barriers: An Empirical Investigation Purpose - The development of IT has enabled organizations to collect and store many times more data than they were able to just decades ago. This means that companies are now faced with managing huge amounts of data, which represents new challenges in ensuring high data quality. The purpose of this paper is to identify barriers to obtaining high master data quality.Design/methodology/approach - This paper defines relevant master data quality barriers and investigates their mutual importance through organizing data quality barriers identified in literature into a framework for analysis of data quality. The importance of the different classes of data quality barriers is investigated by a large questionnaire study, including answers from 787 Danish manufacturing companies.Findings - Based on a literature review, the paper identifies 12 master data quality barriers. The relevance and completeness of this classification is investigated by a large questionnaire study, which also clarifies the mutual importance of the defined barriers and the differences in importance in small, medium, and large companies.Research limitations/implications - The defined classification of data quality barriers provides a point of departure for future research by pointing to relevant areas for investigation of data quality problems. The limitations of the study are that it focuses only on manufacturing companies and master data (i.e. not transaction data).Practical implications - The classification of data quality barriers can give companies increased awareness of why they experience data quality problems. In addition, the paper suggests giving primary focus to organizational issues rather than perceiving poor data quality as an IT problem.Originality/value - Compared to extant classifications of data quality barriers, the contribution of this paper represents a more detailed and complete picture of what the barriers are in relation to data quality. Furthermore, the presented classification has been investigated by a large questionnaire study, for which reason it is founded on a more solid empirical basis than existing classifications.
A Dht-Based Discovery Service For The Internet Of Things Current trends towards the Future Internet are envisaging the conception of novel services endowed with context-aware and autonomic capabilities to improve end users' quality of life. The Internet of Things paradigm is expected to contribute towards this ambitious vision by proposing models and mechanisms enabling the creation of networks of "smart things" on a large scale. It is widely recognized that efficient mechanisms for discovering available resources and capabilities are required to realize such a vision. The contribution of this work consists in a novel discovery service for the Internet of Things. The proposed solution adopts a peer-to-peer approach for guaranteeing scalability, robustness, and easy maintenance of the overall system. While most existing peer-to-peer discovery services proposed for the IoT support solely exact match queries on a single attribute (i.e., the object identifier), our solution can handle multiattribute and range queries. We defined a layered approach by distinguishing three main aspects: multiattribute indexing, range query support, and peer-to-peer routing. We chose to adopt an over-DHT indexing scheme to guarantee ease of design and implementation principles. We report on the implementation of a Proof of Concept in a dangerous goods monitoring scenario, and, finally, we discuss test results for structural properties and query performance evaluation.
27.9 A 200kS/s 13.5b integrated-fluxgate differential-magnetic-to-digital converter with an oversampling compensation loop for contactless current sensing High voltage applications such as electric motor controllers, solar panel power inverters, electric vehicle battery chargers, uninterrupted and switching mode power supplies benefit from the galvanic isolation of contactless current sensors (CCS) [1]. These include magnetic sensors that sense the magnetic field emanating from a current-carrying conductor. The offset and resolution of Hall-effect sensors is at the mT level [1-3], in contrast to the μT-level accuracy of integrated-fluxgate (IFG) magnetometers [4]. Previously reported sampled-data closed-loop IFG readouts have limited BWs as their sampling frequencies (fs) are limited to be less than or equal to the IFG excitation frequency, fEXC [5-7]. This paper describes a differential closed-loop IFG CCS with fs > fEXC. The differential architecture rejects magnetic stray fields and achieves 750× larger BW than the prior closed-loop IFG readouts [6-7], with 10× better offset than the Hall-effect sensors [1-3].
A 12.6 mW, 573-2901 kS/s Reconfigurable Processor for Reconstruction of Compressively Sensed Physiological Signals. This article presents a reconfigurable processor based on the alternating direction method of multipliers (ADMM) algorithm for reconstructing compressively sensed physiological signals. The architecture is flexible enough to support physiological ExG [electrocardiography (ECG), electromyography (EMG), and electroencephalography (EEG)] signals with various signal dimensions (128, 256, 384, and 512). Data c...
1.2
0.2
0.2
0.2
0.2
0.066667
0
0
0
0
0
0
0
0
Control Variate Approximation for DNN Accelerators In this work, we introduce a control variate approximation technique for low error approximate Deep Neural Network (DNN) accelerators. The control variate technique is used in Monte Carlo methods to achieve variance reduction. Our approach significantly decreases the induced error due to approximate multiplications in DNN inference, without requiring time-exhaustive retraining compared to state-of...
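The underlying statistical idea, independent of the accelerator hardware, is the classical control variate: subtracting a correlated quantity with known mean reduces estimator variance. The sketch below demonstrates this for a toy Monte Carlo estimate; f, g, and the sample count are illustrative assumptions.

```python
# Minimal sketch of the control variate technique (statistics only, not the
# DNN accelerator): estimate E[e^X] for X ~ U(0,1) using g(X) = X, E[g] = 0.5.
import numpy as np

rng = np.random.default_rng(2)
n = 10_000
x = rng.uniform(0.0, 1.0, n)

f = np.exp(x)            # quantity to estimate: E[e^X] = e - 1
g = x                    # control variate with known mean 0.5

c = -np.cov(f, g)[0, 1] / np.var(g)      # (near-)optimal coefficient
cv = f + c * (g - 0.5)                   # control-variate estimator

print("plain MC :", f.mean(),  " var:", f.var())
print("with CV  :", cv.mean(), " var:", cv.var())
```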
Approximate Computing: A Survey. As one of the most promising energy-efficient computing paradigms, approximate computing has gained a lot of research attention in the past few years. This paper presents a survey of state-of-the-art work in all aspects of approximate computing and highlights future research challenges in this field.
Exploiting Data Resilience in Wireless Network-on-chip Architectures The emerging wireless Network-on-Chip (WiNoC) architectures are a viable solution for addressing the scalability limitations of manycore architectures in which multi-hop long-range communications strongly impact both the performance and energy figures of the system. The energy consumption of wired links as well as that of radio communications account for a relevant fraction of the overall energy budget. In this article, we extend the approximate computing paradigm to the case of the on-chip communication system in manycore architectures. We present techniques, circuitries, and programming interfaces aimed at reducing the energy consumption of a WiNoC by exploiting the trade-off between energy saving and application output degradation. The proposed platform, namely xWiNoC, uses variable voltage swing links and tunable transmitting power wireless interfaces along with a programming interface that allows the programmer to specify those data structures that are error-resilient. Thus, communications induced by the access to such error-resilient data structures are carried out by using links and radio channels that are configured to work in a low energy mode, albeit by exposing a higher bit error rate. xWiNoC is assessed on a set of applications belonging to different domains in which the trade-off between energy, performance, and application result quality is discussed. We found that up to 50% of communication energy saving can be obtained with a negligible impact on the application output quality and only 3% application performance degradation.
On Performance Optimization and Quality Control for Approximate-Communication-Enabled Networks-on-Chip For many applications showing error forgiveness, approximate computing is a new design paradigm that trades application output accuracy for mitigating computation/communication effort, which results in performance/energy benefit. Since networks-on-chip (NoCs) are one of the major contributors to system performance and power consumption, the underlying communication is approximated to achieve time/...
Moore's Law: what comes next? Moore's Law challenges point to changes in software.
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ⋄W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ⋄W. Thus, ⋄W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Cognitive radio: brain-empowered wireless communications Cognitive radio is viewed as a novel approach for improving the utilization of a precious natural resource: the radio electromagnetic spectrum. The cognitive radio, built on a software-defined radio, is defined as an intelligent wireless communication system that is aware of its environment and uses the methodology of understanding-by-building to learn from the environment and adapt to statistical variations in the input stimuli, with two primary objectives in mind: (1) highly reliable communication whenever and wherever needed; and (2) efficient utilization of the radio spectrum. Following the discussion of interference temperature as a new metric for the quantification and management of interference, the paper addresses three fundamental cognitive tasks: 1) radio-scene analysis; 2) channel-state estimation and predictive modeling; 3) transmit-power control and dynamic spectrum management. This work also discusses the emergent behavior of cognitive radio.
Planning as heuristic search In the AIPS98 Planning Contest, the hsp planner showed that heuristic search planners can be competitive with state-of-the-art Graphplan and sat planners. Heuristic search planners like hsp transform planning problems into problems of heuristic search by automatically extracting heuristics from Strips encodings. They differ from specialized problem solvers such as those developed for the 24-Puzzle and Rubik's Cube in that they use a general declarative language for stating problems and a general mechanism for extracting heuristics from these representations. In this paper, we study a family of heuristic search planners that are based on a simple and general heuristic that assumes that action preconditions are independent. The heuristic is then used in the context of best-first and hill-climbing search algorithms, and is tested over a large collection of domains. We then consider variations and extensions such as reversing the direction of the search for speeding node evaluation, and extracting information about propositional invariants for avoiding dead-ends. We analyze the resulting planners, evaluate their performance, and explain when they do best. We also compare the performance of these planners with two state-of-the-art planners, and show that the simplest planner based on a pure best-first search yields the most solid performance over a large set of problems. We also discuss the strengths and limitations of this approach, establish a correspondence between heuristic search planning and Graphplan, and briefly survey recent ideas that can reduce the current gap in performance between general heuristic search planners and specialized solvers.
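The independence assumption behind this family of heuristics can be made concrete. The sketch below computes the additive heuristic h_add by a Bellman-style fixpoint over a toy STRIPS domain; the actions and unit costs are illustrative assumptions.

```python
# Minimal sketch of the additive heuristic h_add: atom costs are computed by
# fixpoint iteration, and a set of atoms costs the sum of its members' costs.
actions = [  # (preconditions, add effects), unit cost each (assumed domain)
    ({"at-a"}, {"at-b"}),
    ({"at-b"}, {"at-c"}),
    ({"at-b"}, {"has-key"}),
    ({"at-c", "has-key"}, {"door-open"}),
]
init, goal = {"at-a"}, {"door-open"}

INF = float("inf")
cost = {p: (0 if p in init else INF)
        for pre, add in actions for p in pre | add}

changed = True
while changed:                          # fixpoint iteration over all actions
    changed = False
    for pre, add in actions:
        c = sum(cost[p] for p in pre)   # additive independence assumption
        if c < INF:
            for p in add:
                if 1 + c < cost[p]:
                    cost[p] = 1 + c
                    changed = True

print("h_add(init, goal) =", sum(cost[p] for p in goal))   # prints 5
```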
Towards a Common API for Structured Peer-to-Peer Overlays In this paper, we describe an ongoing effort to define common APIs for structured peer-to-peer overlays and the key abstractions that can be built on them. In doing so, we hope to facilitate independent innovation in overlay protocols, services, and applications, to allow direct experimental comparisons, and to encourage application development by third parties. We provide a snapshot of our efforts and discuss open problems in an effort to solicit feedback from the research community.
Towards a higher-order synchronous data-flow language The paper introduces a higher-order synchronous data-flow language in which communication channels may themselves transport programs. This provides a means to dynamically reconfigure data-flow processes. The language comes as a natural and strict extension of both lustre and lucy. This extension is conservative, in the sense that a first-order restriction of the language can receive the same semantics. We illustrate the expressivity of the language with some examples, before giving the formal semantics of the underlying calculus. The language is equipped with a polymorphic type system allowing types to be automatically inferred and a clock calculus rejecting programs for which synchronous execution cannot be statically guaranteed. To our knowledge, this is the first higher-order synchronous data-flow language where stream functions are first-class citizens.
An almost necessary and sufficient condition for robust stability of closed-loop systems with disturbance observer The disturbance observer (DOB)-based controller has been widely employed in industrial applications due to its powerful ability to reject disturbances and compensate plant uncertainties. In spite of various successful applications, no necessary and sufficient condition for robust stability of the closed loop systems with the DOB has been reported in the literature. In this paper, we present an almost necessary and sufficient condition for robust stability when the Q-filter has a sufficiently small time constant. The proposed condition indicates that robust stabilization can be achieved against arbitrarily large (but bounded) uncertain parameters, provided that an outer-loop controller stabilizes the nominal system, and uncertain plant is of minimum phase.
Cross-layer sensors for green cognitive radio. Green cognitive radio is a cognitive radio (CR) that is aware of sustainable development issues and deals with an additional constraint on the decision-making function of the cognitive cycle. In this paper, it is explained how the sensors distributed throughout the different layers of our CR model could help take the best decision in order to best contribute to sustainable development.
20.3 A feedforward controlled on-chip switched-capacitor voltage regulator delivering 10W in 32nm SOI CMOS On-chip (or fully integrated) switched-capacitor (SC) voltage regulators (SCVR) have recently received a lot of attention due to their ease of monolithic integration. The use of deep trench capacitors can lead to SCVR implementations that simultaneously achieve high efficiency, high power density, and fast response time. For the application of granular power distribution of many-core microprocessor systems, the on-chip SCVR must maintain an output voltage above a certain minimum level VOUT,min in order for the microprocessor core to meet setup time requirements. Following a transient load change, the output voltage typically exhibits a droop due to parasitic inductances and resistances in the power distribution network. Therefore, the steady-state output voltage is kept high enough to ensure VOUT > VOUT,min at all times, thereby introducing an output voltage overhead that leads to increased system power consumption. The output voltage droop can be reduced by implementing fast regulation and a sufficient amount of on-chip decoupling capacitance. However, a large amount of on-chip decoupling capacitance is needed to significantly reduce the droop, and it becomes impractical to implement owing to the large chip area overhead required. This paper presents a feedforward control scheme that significantly reduces the output voltage droop in the presence of a large input voltage droop following a transient event. This in turn reduces the required output voltage overhead and may lead to significant overall system power savings.
Power Efficiency Comparison of Event-Driven and Fixed-Rate Signal Conversion and Compression for Biomedical Applications Energy-constrained biomedical recording systems need power-efficient data converters and good signal compression in order to meet the stringent power consumption requirements of many applications. In literature today, typically a SAR ADC in combination with digital compression is used. Recently, alternative event-driven sampling techniques have been proposed that incorporate compression in the ADC, such as level-crossing A/D conversion. This paper describes the power efficiency analysis of such level-crossing ADC (LCADC) and the traditional fixed-rate SAR ADC with simple compression. A model for the power consumption of the LCADC is derived, which is then compared to the power consumption of the SAR ADC with zero-order hold (ZOH) compression for multiple biosignals (ECG, EMG, EEG, and EAP). The LCADC is more power efficient than the SAR ADC up to a cross-over point in quantizer resolution (for example 8 bits for an EEG signal). This cross-over point decreases with the ratio of the maximum to average slope in the signal of the application. It also changes with the technology and design techniques used. The LCADC is thus suited for low to medium resolution applications. In addition, the event-driven operation of an LCADC results in fewer data to be transmitted in a system application. The event-driven LCADC without timer and with single-bit quantizer achieves a reduction in power consumption at system level of two orders of magnitude, an order of magnitude better than the SAR ADC with ZOH compression. At system level, the LCADC thus offers a big advantage over the SAR ADC.
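The comparison the paper formalizes can be illustrated with a toy signal: a level-crossing sampler emits an event only when the signal crosses a quantization level, so bursty signals produce far fewer events than fixed-rate sampling. The signal shape and resolutions below are illustrative assumptions.

```python
# Minimal sketch: sample count of a level-crossing sampler vs fixed-rate
# conversion on a bursty, ECG-like toy signal.
import numpy as np

fs = 10_000                                # fixed sampling rate (Hz), assumed
t = np.arange(0, 1.0, 1 / fs)
# mostly flat signal with four brief Gaussian spikes per second
sig = np.exp(-((t % 0.25) - 0.05) ** 2 / 2e-5)

bits = 6
levels = np.linspace(sig.min(), sig.max(), 2 ** bits)
codes = np.digitize(sig, levels)           # quantization level index over time

lc_events = np.count_nonzero(np.diff(codes))   # one event per level crossing
print(f"fixed-rate samples/s  : {fs}")
print(f"level-crossing events : {lc_events}")
```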
1.2
0.2
0.2
0.2
0.1
0
0
0
0
0
0
0
0
0
Scheduling Techniques for GPU Architectures with Processing-In-Memory Capabilities. Processing data in or near memory (PIM), as opposed to in conventional computational units in a processor, can greatly alleviate the performance and energy penalties of data transfers from/to main memory. Graphics Processing Unit (GPU) architectures and applications, where main memory bandwidth is a critical bottleneck, can benefit from the use of PIM. To this end, an application should be properly partitioned and scheduled to execute on either the main, powerful GPU cores that are far away from memory or the auxiliary, simple GPU cores that are close to memory (e.g., in the logic layer of 3D-stacked DRAM). This paper investigates two key code scheduling issues in such a GPU architecture that has PIM capabilities, to maximize performance and energy-efficiency: (1) how to automatically identify the code segments, or kernels, to be offloaded to the cores in memory, and (2) how to concurrently schedule multiple kernels on the main GPU cores and the auxiliary GPU cores in memory. We develop two new runtime techniques: (1) a regression-based affinity prediction model and mechanism that accurately identifies which kernels would benefit from PIM and offloads them to GPU cores in memory, and (2) a concurrent kernel management mechanism that uses the affinity prediction model, a new kernel execution time prediction model, and kernel dependency information to decide which kernels to schedule concurrently on main GPU cores and the GPU cores in memory. Our experimental evaluations across 25 GPU applications demonstrate that these two techniques can significantly improve both application performance (by 25% and 42%, respectively, on average) and energy efficiency (by 28% and 27%).
Toward standardized near-data processing with unrestricted data placement for GPUs 3D-stacked memory devices with processing logic can help alleviate the memory bandwidth bottleneck in GPUs. However, in order for such Near-Data Processing (NDP) memory stacks to be used for different GPU architectures, it is desirable to standardize the NDP architecture. Our proposal enables this standardization by allowing data to be spread across multiple memory stacks as is the norm in high-performance systems without an MMU on the NDP stack. The keys to this architecture are the ability to move data between memory stacks as required for computation, and a partitioned execution mechanism that offloads memory-intensive application segments onto the NDP stack and decouples address translation from DRAM accesses. By enhancing this system with a smart offload selection mechanism that is cognizant of the compute capability of the NDP and cache locality on the host processor, system performance and energy are improved by up to 66.8% and 37.6%, respectively.
Concurrent Data Structures for Near-Memory Computing. The performance gap between memory and CPU has grown exponentially. To bridge this gap, hardware architects have proposed near-memory computing (also called processing-in-memory, or PIM), where a lightweight processor (called a PIM core) is located close to memory. Due to its proximity to memory, a memory access from a PIM core is much faster than that from a CPU core. New advances in 3D integration and die-stacked memory make PIM viable in the near future. Prior work has shown significant performance improvements by using PIM for embarrassingly parallel and data-intensive applications, as well as for pointer-chasing traversals in sequential data structures. However, current server machines have hundreds of cores, and algorithms for concurrent data structures exploit these cores to achieve high throughput and scalability, with significant benefits over sequential data structures. Thus, it is important to examine how PIM performs with respect to modern concurrent data structures and understand how concurrent data structures can be developed to take advantage of PIM. This paper is the first to examine the design of concurrent data structures for PIM. We show two main results: (1) naive PIM data structures cannot outperform state-of-the-art concurrent data structures, such as pointer-chasing data structures and FIFO queues, (2) novel designs for PIM data structures, using techniques such as combining, partitioning and pipelining, can outperform traditional concurrent data structures, with a significantly simpler design.
Evolution of Memory Architecture Computer memories continue to serve the role that they first served in the electronic discrete variable automatic computer (EDVAC) machine documented by John von Neumann, namely that of supplying instructions and operands for calculations in a timely manner. As technology has made possible significantly larger and faster machines with multiple processors, the relative distance in processor cycles ...
Massively parallel skyline computation for processing-in-memory architectures Processing-In-Memory (PIM) is an increasingly popular architecture aimed at addressing the 'memory wall' crisis by prioritizing the integration of processors within DRAM. It promotes low data access latency, high bandwidth, massive parallelism, and low power consumption. The skyline operator is a known primitive used to identify those multi-dimensional points offering optimal trade-offs within a given dataset. For large multidimensional datasets, calculating the skyline is extensively compute and data intensive. Although PIM systems present opportunities to mitigate this cost, their execution model relies on all processors operating in isolation with minimal data exchange. This prohibits direct application of known skyline optimizations, which are inherently sequential, create dependencies and large intermediate results that limit the maximum parallelism and throughput, and require an expensive merging phase. In this work, we address these challenges by introducing the first skyline algorithm for PIM architectures, called DSky. It is designed to be massively parallel and throughput efficient by leveraging a novel work assignment strategy that emphasizes load balancing. Our experiments demonstrate that it outperforms the state-of-the-art algorithms for CPUs and GPUs in most cases. DSky achieves 2× to 14× higher throughput compared to the state-of-the-art solutions on competing CPU and GPU architectures. Furthermore, we showcase DSky's good scaling properties, which are intertwined with PIM's ability to allocate resources with minimal added cost. In addition, we showcase an order of magnitude better energy consumption compared to CPUs and GPUs.
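For reference, the skyline primitive itself is easy to state. The sketch below is the standard quadratic dominance test over points to be minimized in every dimension; it is the baseline formulation only, not DSky's load-balanced parallel algorithm:

    # p dominates q if p is no worse in all dimensions and strictly
    # better in at least one (minimizing every dimension).
    def dominates(p, q):
        return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

    def skyline(points):
        return [p for p in points
                if not any(dominates(q, p) for q in points if q != p)]

    pts = [(1, 9), (3, 3), (2, 8), (5, 1), (4, 4)]
    print(skyline(pts))
    # [(1, 9), (3, 3), (2, 8), (5, 1)]; (4, 4) is dominated by (3, 3).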
Towards a scatter-gather architecture: hardware and software issues The on-node performance of high-performance computing (HPC) applications is traditionally dominated by memory operations. Put simply, memory is what these applications "do." Unfortunately, they don't do it well. Caches, our first line of attack in the battle for memory performance, often throw away most of the data they fetch before using it. Processor cores, one of our most expensive resources, spend an inordinate amount of time performing simple address computations. Addressing these issues will require new approaches to how on-chip memory is organized and how memory operations are performed. Under Project 38, a joint Department of Energy / Department of Defense architectural research project, we have focused on exploring what a flexible in-memory scatter-gather architecture could look like in the context of several important HPC applications.
iPIM: Programmable In-Memory Image Processing Accelerator Using Near-Bank Architecture Image processing is becoming an increasingly important domain for many applications on workstations and the datacenter that require accelerators for high performance and energy efficiency. GPU, which is the state-of-the-art accelerator for image processing, suffers from the memory bandwidth bottleneck. To tackle this bottleneck, near-bank architecture provides a promising solution due to its enormous bank-internal bandwidth and low-energy memory access. However, previous work lacks hardware programmability, while image processing workloads contain numerous heterogeneous pipeline stages with diverse computation and memory access patterns. Enabling programmable near-bank architecture with low hardware overhead remains challenging.This work proposes iPIM, the first programmable in-memory image processing accelerator using near-bank architecture. We first design a decoupled control-execution architecture to provide lightweight programmability support. Second, we propose the SIMB (Single-Instruction-Multiple-Bank) ISA to enable flexible control flow and data access. Third, we present an end-to-end compilation flow based on Halide that supports a wide range of image processing applications and maps them to our SIMB ISA. We further develop iPIM-aware compiler optimizations, including register allocation, instruction reordering, and memory order enforcement to improve performance. We evaluate a set of representative image processing applications on iPIM and demonstrate that on average iPIM obtains 11.02× acceleration and 79.49% energy saving over an NVIDIA Tesla V100 GPU. Further analysis shows that our compiler optimizations contribute 3.19× speedup over the unoptimized baseline.
NDC: Analyzing the impact of 3D-stacked memory+logic devices on MapReduce workloads While Processing-in-Memory has been investigated for decades, it has not been embraced commercially. A number of emerging technologies have renewed interest in this topic. In particular, the emergence of 3D stacking and the imminent release of Micron's Hybrid Memory Cube device have made it more practical to move computation near memory. However, the literature is missing a detailed analysis of a killer application that can leverage a Near Data Computing (NDC) architecture. This paper focuses on in-memory MapReduce workloads that are commercially important and are especially suitable for NDC because of their embarrassing parallelism and largely localized memory accesses. The NDC architecture incorporates several simple processing cores on a separate, non-memory die in a 3D-stacked memory package; these cores can perform Map operations with efficient memory access and without hitting the bandwidth wall. This paper describes and evaluates a number of key elements necessary in realizing efficient NDC operation: (i) low-EPI cores, (ii) long daisy chains of memory devices, (iii) the dynamic activation of cores and SerDes links. Compared to a baseline that is heavily optimized for MapReduce execution, the NDC design yields up to 15X reduction in execution time and 18X reduction in system energy.
NAND-Net: Minimizing Computational Complexity of In-Memory Processing for Binary Neural Networks Popular deep learning technologies suffer from memory bottlenecks, which significantly degrade the energy-efficiency, especially in mobile environments. In-memory processing for binary neural networks (BNNs) has emerged as a promising solution to mitigate such bottlenecks, and various relevant works have been presented accordingly. However, their performances are severely limited by the overheads induced by the modification of the conventional memory architectures. To alleviate the performance degradation, we propose NAND-Net, an efficient architecture to minimize the computational complexity of in-memory processing for BNNs. Based on the observation that BNNs contain many redundancies, we decomposed each convolution into sub-convolutions and eliminated the unnecessary operations. In the remaining convolution, each binary multiplication (bitwise XNOR) is replaced by a bitwise NAND operation, which can be implemented without any bit cell modifications. This NAND operation further brings an opportunity to simplify the subsequent binary accumulations (popcounts). We reduced the operation cost of those popcounts by exploiting the data patterns of the NAND outputs. Compared to the prior state-of-the-art designs, NAND-Net achieves 1.04-2.4x speedup and 34-59% energy saving, thus making it a suitable solution to implement efficient in-memory processing for BNNs.
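The starting point that NAND-Net rewrites is the XNOR-popcount identity for binarized dot products. The sketch below shows only that textbook identity; the convolution decomposition and NAND substitution from the paper are not reproduced:

    # Encode +1 as bit 1 and -1 as bit 0; for N-bit operands,
    #   dot(a, b) = 2 * popcount(XNOR(a, b)) - N.
    N = 8
    a_bits = 0b10110010
    b_bits = 0b10010110

    mask = (1 << N) - 1
    xnor = ~(a_bits ^ b_bits) & mask
    dot_fast = 2 * bin(xnor).count("1") - N

    # Reference computation on explicit +/-1 vectors.
    to_pm1 = lambda bits: [1 if (bits >> i) & 1 else -1 for i in range(N)]
    dot_ref = sum(x * y for x, y in zip(to_pm1(a_bits), to_pm1(b_bits)))

    assert dot_fast == dot_ref
    print(dot_fast)   # 4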
A domain-specific architecture for deep neural networks. Tensor processing units improve performance per watt of neural networks in Google datacenters by roughly 50x.
Bayesian learning in social networks We extend the standard model of social learning in two ways. First, we introduce a social network and assume that agents can only observe the actions of agents to whom they are connected by this network. Secondly, we allow agents to choose a different action at each date. If the network satisfies a connectedness assumption, the initial diversity resulting from diverse private information is eventually replaced by uniformity of actions, though not necessarily of beliefs, in finite time with probability one. We look at particular networks to illustrate the impact of network architecture on speed of convergence and the optimality of absorbing states. Convergence is remarkably rapid, so that asymptotic results are a good approximation even in the medium run.
Implementing unreliable failure detectors with unknown membership
A Primer on Hardware Security: Models, Methods, and Metrics The multinational, distributed, and multistep nature of integrated circuit (IC) production supply chain has introduced hardware-based vulnerabilities. Existing literature in hardware security assumes ad hoc threat models, defenses, and metrics for evaluation, making it difficult to analyze and compare alternate solutions. This paper systematizes the current knowledge in this emerging field, including a classification of threat models, state-of-the-art defenses, and evaluation metrics for important hardware-based attacks.
An Energy-Efficient SAR ADC With Event-Triggered Error Correction This brief presents an energy-efficient fully differential 10-bit successive approximation register (SAR) analog-to-digital converter (ADC) with a sampling rate of 320 kS/s. The optimal capacitor split and bypass number is analyzed to achieve the highest switching energy efficiency. The common-mode voltage level remains constant during the MSB-capacitor switching cycles. To minimize nonlinearity due to charge averaging voltage offset or DAC array mismatch, an event-triggered error correction method is employed as a redundant cycle for detecting digital code errors within 1 least significant bit (LSB). A test chip was fabricated using the 180-nm CMOS process and occupied a 0.0564-mm² core area. Under a regular 0.65-V supply voltage, the ADC achieved an effective number of bits of 9.61 bits and a figure of merit (FOM) of 6.38 fJ/conversion-step, with 1.6-μW power dissipation for a low-frequency input. The measured differential and integral nonlinearity results are within 0.30 LSB and 0.43 LSB, respectively.
1.01896
0.018619
0.018531
0.018182
0.018182
0.018182
0.014545
0.007661
0.000279
0.000011
0
0
0
0
Handwritten digit recognition: applications of neural network chips and automatic learning Two novel methods for achieving handwritten digit recognition are described. The first method is based on a neural network chip that performs line thinning and feature extraction using local template matching. The second method is implemented on a digital signal processor and makes extensive use of constrained automatic learning. Experimental results obtained using isolated handwritten digits taken from postal zip codes, a rather difficult data set, are reported and discussed
Compiler algorithms for synchronization Translating program loops into a parallel form is one of the most important transformations performed by concurrentizing compilers. This transformation often requires the insertion of synchronization instructions within the body of the concurrent loop. Several loop synchronization techniques are presented first. Compiler algorithms to generate synchronization instructions for singly-nested loops are then discussed. Finally, a technique for the elimination of redundant synchronization instructions is presented.
A Software Scheme for Multithreading on CGRAs Recent industry trends show a drastic rise in the use of hand-held embedded devices, from everyday applications to medical (e.g., monitoring devices) and critical defense applications (e.g., sensor nodes). The two key requirements in the design of such devices are their processing capabilities and battery life. There is therefore an urgency to build high-performance and power-efficient embedded devices, inspiring researchers to develop novel system designs. The use of a coprocessor (application-specific hardware) to offload power-hungry computations is gaining favor among system designers seeking to meet their power budgets. We propose the use of CGRAs (Coarse-Grained Reconfigurable Arrays) as power-efficient coprocessors. Though CGRAs have been widely used for streaming applications, the extensive compiler support required limits their applicability and use as general-purpose coprocessors. In addition, a CGRA structure can efficiently execute only one statically scheduled kernel at a time, which is a serious limitation when used as an accelerator to a multithreaded or multitasking processor. In this work, we envision a multithreaded CGRA where multiple schedules (or kernels) can be executed simultaneously on the CGRA (as a coprocessor). We propose a comprehensive software scheme that transforms the traditionally single-threaded CGRA into a multithreaded coprocessor to be used as a power-efficient accelerator for multithreaded embedded processors. Our software scheme includes (1) a compiler framework that integrates with existing CGRA mapping techniques to prepare kernels for execution on the multithreaded CGRA and (2) a runtime mechanism that dynamically schedules multiple kernels (offloaded from the processor) to execute simultaneously on the CGRA coprocessor. Our multithreaded CGRA coprocessor implementation thus makes it possible to achieve improved power-efficient computing in modern multithreaded embedded systems.
Domain Specialization Is Generally Unnecessary for Accelerators. Domain-specific accelerators (DSAs), which sacrifice programmability for efficiency, are a reaction to the waning benefits of device scaling. This article demonstrates that there are commonalities between DSAs that can be exploited with programmable mechanisms. The goals are to create a programmable architecture that can match the benefits of a DSA and to create a platform for future accelerator i...
PathSeeker: A Fast Mapping Algorithm for CGRAs Coarse-grained reconfigurable arrays (CGRAs) have gained traction over the years as a low-power accelerator due to the efficient mapping of the compute-intensive loops onto the 2-D array by the CGRA compiler. When encountering a mapping failure for a given node, existing mapping techniques either exit and retry the mapping anew, or perform backtracking, i.e., recursively remove the previously mapped node to find a valid mapping. Abandoning mapping and starting afresh can deteriorate the quality of mapping and the compilation time. Even backtracking may not be the best choice since the previous node may not be the incorrectly placed node. To tackle this issue, we propose PathSeeker - a mapping approach that analyzes mapping failures and performs local adjustments to the schedule to obtain a mapping. Experimental results on 35 top performance-critical loops from MiBench, Rodinia, and Parboil benchmark suites demonstrate that PathSeeker can map all of them with better mapping quality and dramatically less compilation time than the previous state-of-the-art approaches - GraphMinor and RAMP, which were unable to map 20 and 5 loops, respectively. Over these benchmarks, PathSeeker achieves 28% better performance at 550x compilation speedup over GraphMinor and 3% better performance at 10x compilation speedup over RAMP on a 4x4 CGRA.
Hierarchical reconfigurable computing arrays for efficient CGRA-based embedded systems Coarse-grained reconfigurable architecture (CGRA) based embedded system aims at achieving high system performance with sufficient flexibility to map variety of applications. However, significant area and power consumption in the arrays prohibits its competitive advantage to be used as a processing core. In this work, we propose hierarchical reconfigurable computing array architecture to reduce power/area and enhance performance in configurable embedded system. The CGRA-based embedded systems that consist of hierarchical configurable computing arrays with varying size and communication speed were examined for multimedia and other applications. Experimental results show that the proposed approach reduces on-chip area by 22%, execution time by up to 72% and reduces power consumption by up to 55% when compared with the conventional CGRA-based architectures.
Fifer: Practical Acceleration of Irregular Applications on Reconfigurable Architectures Coarse-grain reconfigurable arrays (CGRAs) can achieve much higher performance and efficiency than general-purpose cores, approaching the performance of a specialized design while retaining programmability. Unfortunately, CGRAs have so far only been effective on applications with regular compute patterns. However, many important workloads like graph analytics, sparse linear algebra, and databases, are irregular applications with unpredictable access patterns and control flow. Since CGRAs map computation statically to a spatial fabric of functional units, irregular memory accesses and control flow cause frequent stalls and load imbalance. We present Fifer, an architecture and compilation technique that makes irregular applications efficient on CGRAs. Fifer first decouples irregular applications into a feed-forward network of pipeline stages. Each resulting stage is regular and can efficiently use the CGRA fabric. However, irregularity causes stages to have widely varying loads, resulting in high load imbalance if they execute spatially in a conventional CGRA. Fifer solves this by introducing dynamic temporal pipelining: it time-multiplexes multiple stages onto the same CGRA, and dynamically schedules stages to avoid load imbalance. Fifer makes time-multiplexing fast and cheap to quickly respond to load imbalance while retaining the efficiency and simplicity of a CGRA design. We show that Fifer improves performance by gmean 2.8 × (and up to 5.5 ×) over a conventional CGRA architecture (and by gmean 17 × over an out-of-order multicore) on a variety of challenging irregular applications.
Deep Residual Learning for Image Recognition Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers - 8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions1, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.
STDP-Based Pruning of Connections and Weight Quantization in Spiking Neural Networks for Energy-Efficient Recognition Spiking neural networks (SNNs) with a large number of weights and varied weight distribution can be difficult to implement in emerging in-memory computing hardware due to the limitations on crossbar size (implementing dot product), the constrained number of conductance states in non-CMOS devices and the power budget. We present a sparse SNN topology where noncritical connections are pruned to reduce the network size, and the remaining critical synapses are weight quantized to accommodate for limited conductance states. Pruning is based on the power law weight-dependent spike timing dependent plasticity model; synapses between pre- and post-neuron with high spike correlation are retained, whereas synapses with low correlation or uncorrelated spiking activity are pruned. The weights of the retained connections are quantized to the available number of conductance states. The process of pruning noncritical connections and quantizing the weights of critical synapses is performed at regular intervals during training. We evaluated our sparse and quantized network on MNIST dataset and on a subset of images from Caltech-101 dataset. The compressed topology achieved a classification accuracy of 90.1% (91.6%) on the MNIST (Caltech-101) dataset with 3.1X (2.2X) and 4X (2.6X) improvement in energy and area, respectively. The compressed topology is energy and area efficient while maintaining the same classification accuracy of a 2-layer fully connected SNN topology.
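The prune-then-quantize flow described above can be sketched in a few lines. Here the spike-correlation matrix is random stand-in data, and the 0.5 threshold and four conductance states are arbitrary illustrative choices, not the paper's power-law STDP criterion:

    import numpy as np

    rng = np.random.default_rng(0)
    weights = rng.normal(0.0, 1.0, size=(4, 4))
    correlation = rng.random((4, 4))       # stand-in for spike correlation

    # Prune: drop synapses whose pre/post spike correlation is low.
    keep = correlation >= 0.5
    pruned = np.where(keep, weights, 0.0)

    # Quantize survivors to a small set of conductance states
    # (uniform levels over the remaining dynamic range).
    n_states = 4
    levels = np.linspace(pruned[keep].min(), pruned[keep].max(), n_states)
    nearest = np.abs(pruned[..., None] - levels).argmin(axis=-1)
    quantized = np.where(keep, levels[nearest], 0.0)

    print(f"kept {keep.sum()}/{keep.size} synapses, {n_states} states")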
SimFlex: Statistical Sampling of Computer System Simulation Timing-accurate full-system multiprocessor simulations can take years because of architecture and application complexity. Statistical sampling makes simulation-based studies feasible by providing ten-thousand-fold reductions in simulation runtime and enabling thousand-way simulation parallelism.
Language-based information-flow security Current standard security practices do not provide substantial assurance that the end-to-end behavior of a computing system satisfies important security policies such as confidentiality. An end-to-end confidentiality policy might assert that secret input data cannot be inferred by an attacker through the attacker's observations of system output; this policy regulates information flow. Conventional security mechanisms such as access control and encryption do not directly address the enforcement of information-flow policies. Previously, a promising new approach has been developed: the use of programming-language techniques for specifying and enforcing information-flow policies. In this paper, we survey the past three decades of research on information-flow security, particularly focusing on work that uses static program analysis to enforce information-flow policies. We give a structured view of work in the area and identify some important open challenges.
Active Damping In Dc/Dc Power Electronic Converters: A Novel Method To Overcome The Problems Of Constant Power Loads Multi-converter power electronic systems exist in land, sea, air, and space vehicles. In these systems, load converters exhibit constant power load (CPL) behavior for the feeder converters and tend to destabilize the system. In this paper, the implementation of novel active-damping techniques on dc/dc converters has been shown. Moreover, the proposed active-damping method is used to overcome the negative impedance instability problem caused by the CPLs. The effectiveness of the new proposed approach has been verified by PSpice simulations and experimental results.
Hybrid Forward and Backward Threshold-Compensated RF-DC Power Converter for RF Energy Harvesting This paper presents a hybrid forward and backward threshold voltage compensated radio-frequency to direct current (RF-to-DC) power conversion circuit for RF energy harvesting applications. The proposed circuit uses standard p-channel metal-oxide semiconductor transistors in all the stages except for the first few stages to allow individual body biasing eliminating the need for triple-well technology in the previously reported forward compensation schemes. Two different RF-DC power conversion circuits, one optimized to provide high power conversion efficiency (PCE) and the other to produce a large output DC voltage harvested from extremely low input power levels, are designed and fabricated in IBM's 0.13 μm complementary metal-oxide-semiconductor technology. The first circuit exhibits a measured maximum PCE of 22.6% at -16.8 dBm (20.9 μW) and produces 1 V across a 1 MΩ load from a remarkably low input power level of -21.6 dBm (6.9 μW) while the latter circuit produces 2.8 V across a 1 MΩ load from a peak-to-peak input voltage of 170 mV achieving a voltage multiplication ratio of 17. Also, design strategies are developed to enhance the output DC voltage and to optimize the PCE of threshold voltage compensated voltage multiplier.
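The quoted input power levels are easy to sanity-check, since dBm converts to power as P = 1 mW x 10^(dBm/10). A quick check in Python:

    def dbm_to_uw(dbm):
        """Convert dBm to microwatts: 1 mW * 10**(dBm/10), times 1000."""
        return 1000.0 * 10 ** (dbm / 10.0)

    print(round(dbm_to_uw(-16.8), 1))   # 20.9 uW, as quoted
    print(round(dbm_to_uw(-21.6), 1))   # 6.9 uW, as quoted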
A Sub-μW Reconfigurable Front-End for Invasive Neural Recording That Exploits the Spectral Characteristics of the Wideband Neural Signal This paper presents a sub-μW ac-coupled reconfigurable front-end for invasive wideband neural signal recording. The proposed topology embeds filtering capabilities enabling the selection of different frequency bands inside the neural signal spectrum. Power consumption is optimized by defining specific noise targets for each sub-band. These targets take into account the spectral characteristics of wideband neural signals: local field potentials (LFP) exhibit 1/f^x magnitude scaling while action potentials (AP) show uniform magnitude across frequency. Additionally, noise targets also consider electrode noise and the spectral distribution of noise sources in the circuit. An experimentally verified prototype designed in a standard 180 nm CMOS process draws 815 nW from a 1 V supply. The front-end is able to select among four different frequency bands (modes) up to 5 kHz. The measured input-referred spot noise at 500 Hz in the LFP mode (1 Hz - 700 Hz) is 55 nV/√Hz while the integrated noise in the AP mode (200 Hz - 5 kHz) is 4.1 μVrms. The proposed front-end achieves sub-μW operation without penalizing other specifications such as input swing, common-mode or power-supply rejection ratios. It reduces the power consumption of neural front-ends with spectral selectivity by 6.1× and, compared with conventional wideband front-ends, it obtains a reduction of 2.5×.
1.050975
0.05
0.05
0.05
0.05
0.025
0.016667
0.001561
0.000147
0.000005
0
0
0
0
A Sub-μW Reconfigurable Front-End for Invasive Neural Recording That Exploits the Spectral Characteristics of the Wideband Neural Signal This paper presents a sub-μW ac-coupled reconfigurable front-end for invasive wideband neural signal recording. The proposed topology embeds filtering capabilities enabling the selection of different frequency bands inside the neural signal spectrum. Power consumption is optimized by defining specific noise targets for each sub-band. These targets take into account the spectral characteristics of wideband neural signals: local field potentials (LFP) exhibit 1/f^x magnitude scaling while action potentials (AP) show uniform magnitude across frequency. Additionally, noise targets also consider electrode noise and the spectral distribution of noise sources in the circuit. An experimentally verified prototype designed in a standard 180 nm CMOS process draws 815 nW from a 1 V supply. The front-end is able to select among four different frequency bands (modes) up to 5 kHz. The measured input-referred spot noise at 500 Hz in the LFP mode (1 Hz - 700 Hz) is 55 nV/√Hz while the integrated noise in the AP mode (200 Hz - 5 kHz) is 4.1 μVrms. The proposed front-end achieves sub-μW operation without penalizing other specifications such as input swing, common-mode or power-supply rejection ratios. It reduces the power consumption of neural front-ends with spectral selectivity by 6.1× and, compared with conventional wideband front-ends, it obtains a reduction of 2.5×.
Theory and Implementation of an Analog-to-Information Converter using Random Demodulation The new theory of compressive sensing enables direct analog-to-information conversion of compressible signals at sub-Nyquist acquisition rates. The authors develop new theory, algorithms, performance bounds, and a prototype implementation for an analog-to-information converter based on random demodulation. The architecture is particularly apropos for wideband signals that are sparse in the time-frequency plane. End-to-end simulations of a complete transistor-level implementation prove the concept under the effect of circuit nonidealities.
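The random-demodulation chain reduces to three steps: mix the input with a pseudo-random ±1 chipping sequence, integrate, and dump at a sub-Nyquist rate. A minimal sketch with illustrative parameters (the sparse-recovery stage that reconstructs the signal is omitted):

    import numpy as np

    rng = np.random.default_rng(1)
    W = 1024                                # Nyquist-rate grid length, assumed
    window = 16                             # integrate-and-dump length, assumed

    t = np.arange(W)
    x = np.cos(2 * np.pi * 50 * t / W)      # frequency-sparse test input

    chips = rng.choice([-1.0, 1.0], size=W) # random demodulation sequence
    y = (x * chips).reshape(W // window, window).sum(axis=1)

    print(f"Nyquist samples: {W}, low-rate measurements: {len(y)}")
    # Recovering x from y requires a sparse solver (e.g. basis pursuit),
    # which is beyond this sketch.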
Ultra-High Input Impedance, Low Noise Integrated Amplifier for Noncontact Biopotential Sensing Noncontact electrocardiogram/electroencephalogram/electromyogram electrodes, which operate primarily through capacitive coupling, have been extensively studied for unobtrusive physiological monitoring. Previous implementations using discrete off-the-shelf amplifiers have been encumbered by the need for manually tuned input capacitance neutralization networks and complex dc-biasing schemes. We have designed and fabricated a custom integrated noncontact sensor front-end amplifier that fully bootstraps internal and external parasitic impedances. DC stability without the need for external large-valued resistances is ensured by an ac-bootstrapped, low-leakage, on-chip biasing network. The amplifier achieves, without neutralization, input impedance of 60 fF ∥ 50 TΩ, input-referred noise of 0.05 fA/√Hz and 200 nV/√Hz at 1 Hz, and power consumption of 1.5 μA per channel at a 3.3 V supply voltage. Stable frequency response is demonstrated below 0.05 Hz with electrode coupling capacitances as low as 0.5 pF.
A high input impedance low-noise instrumentation amplifier with JFET input This paper presents a high input impedance instrumentation amplifier with low-noise, low-power operation. A JFET input pair is employed instead of CMOS to significantly reduce the flicker noise. This amplifier features high input impedance (15.3 GΩ∥1.39 pF) by using a current feedback technique and the JFET input. The amplifier has a mid-band gain of 39.9 dB, draws 3.65 μA from a 2.8-V supply, and exhibits an input-referred noise of 3.81 μVrms integrated from 10 mHz to 100 kHz, corresponding to a noise efficiency factor (NEF) of 3.23.
A 0.5–1.1-V Adaptive Bypassing SAR ADC Utilizing the Oscillation-Cycle Information of a VCO-Based Comparator A successive approximation register (SAR) analog-to-digital converter (ADC) with a voltage-controlled oscillator (VCO)-based comparator is presented in this paper. The relationship between the input voltage and the number of oscillation cycles (NOC) to reach a VCO-comparator decision is explored, implying an inherent coarse quantization in parallel with the normal comparison. The NOC as a design parameter is introduced and analyzed with noise, metastability, and tradeoff considerations. The NOC is exploited to bypass a certain number of SAR cycles for higher power efficiency of VCO-based SAR ADCs. To cope with the process, voltage, and temperature (PVT) variations, an adaptive bypassing technique is proposed, tracking and correcting window sizes in the background. Fabricated in a 40-nm CMOS process, the ADC achieves a peak effective number of bits of 9.71 b at 10 MS/s. Walden figure of merit (FoM) of 2.4–6.85 fJ/conv.-step is obtained over a wide range of supply voltages and sampling rates. Measurement has been carried out under typical, fast-fast, and slow-slow process corners and 0 °C–100 °C temperature range, showing that the proposed ADC is robust over PVT variations without any off-chip calibration or tuning.
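For context, the conversion that the bypassing technique shortens is a plain per-bit binary search. A baseline SAR loop is sketched below; the VCO comparator's oscillation-cycle count and the adaptive bypass window are not modeled:

    def sar_convert(vin, vref=1.0, bits=10):
        """Baseline SAR: one comparator decision per bit, MSB first."""
        code = 0
        for i in reversed(range(bits)):
            trial = code | (1 << i)            # tentatively set bit i
            if vin >= vref * trial / (1 << bits):
                code = trial                   # comparator says keep it
        return code

    print(sar_convert(0.3))   # 307, i.e. floor(0.3 * 1024)

The bypassing idea is that a coarse estimate (in the paper, derived from the VCO comparator's cycle count) lets several of these iterations be skipped, saving comparator and DAC energy.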
An Energy-Efficient SAR ADC With Event-Triggered Error Correction This brief presents an energy-efficient fully differential 10-bit successive approximation register (SAR) analog-to-digital converter (ADC) with a sampling rate of 320 kS/s. The optimal capacitor split and bypass number is analyzed to achieve the highest switching energy efficiency. The common-mode voltage level remains constant during the MSB-capacitor switching cycles. To minimize nonlinearity due to charge averaging voltage offset or DAC array mismatch, an event-triggered error correction method is employed as a redundant cycle for detecting digital code errors within 1 least significant bit (LSB). A test chip was fabricated using the 180-nm CMOS process and occupied a 0.0564-mm² core area. Under a regular 0.65-V supply voltage, the ADC achieved an effective number of bits of 9.61 bits and a figure of merit (FOM) of 6.38 fJ/conversion-step, with 1.6-μW power dissipation for a low-frequency input. The measured differential and integral nonlinearity results are within 0.30 LSB and 0.43 LSB, respectively.
A 1V 3.5 μW Bio-AFE With Chopper-Capacitor-Chopper Integrator-Based DSL and Low Power GM-C Filter This brief presents a low-noise, low-power bio-signal acquisition analog front-end (Bio-AFE). It mainly includes a capacitively coupled chopper-stabilized instrumentation amplifier (CCIA), a programmable gain amplifier (PGA), a low-pass filter (LPF), and a successive approximation analog to digital converter (SAR ADC). A chopper-capacitor-chopper integrator based DC servo loop (C3IB-DSL...
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
Building efficient wireless sensor networks with low-level naming In most distributed systems, naming of nodes for low-level communication leverages topological location (such as node addresses) and is independent of any application. In this paper, we investigate an emerging class of distributed systems where low-level communication does not rely on network topological location. Rather, low-level communication is based on attributes that are external to the network topology and relevant to the application. When combined with dense deployment of nodes, this kind of named data enables in-network processing for data aggregation, collaborative signal processing, and similar problems. These approaches are essential for emerging applications such as sensor networks where resources such as bandwidth and energy are limited. This paper is the first description of the software architecture that supports named data and in-network processing in an operational, multi-application sensor-network. We show that approaches such as in-network aggregation and nested queries can significantly affect network traffic. In one experiment aggregation reduces traffic by up to 42% and nested queries reduce loss rates by 30%. Although aggregation has been previously studied in simulation, this paper demonstrates nested queries as another form of in-network processing, and it presents the first evaluation of these approaches over an operational testbed.
Formal verification in hardware design: a survey In recent years, formal methods have emerged as an alternative approach to ensuring the quality and correctness of hardware designs, overcoming some of the limitations of traditional validation techniques such as simulation and testing. There are two main aspects to the application of formal methods in a design process: the formal framework used to specify desired properties of a design and the verification techniques and tools used to reason about the relationship between a specification and a corresponding implementation. We survey a variety of frameworks and techniques proposed in the literature and applied to actual designs. The specification frameworks we describe include temporal logics, predicate logic, abstraction and refinement, as well as containment between ω-regular languages. The verification techniques presented include model checking, automata-theoretic techniques, automated theorem proving, and approaches that integrate the above methods. In order to provide insight into the scope and limitations of currently available techniques, we present a selection of case studies where formal methods were applied to industrial-scale designs, such as microprocessors, floating-point hardware, protocols, memory subsystems, and communications hardware.
Exploiting availability prediction in distributed systems Loosely-coupled distributed systems have significant scale and cost advantages over more traditional architectures, but the availability of the nodes in these systems varies widely. Availability modeling is crucial for predicting per-machine resource burdens and understanding emergent, system-wide phenomena. We present new techniques for predicting availability and test them using traces taken from three distributed systems. We then describe three applications of availability prediction. The first, availability-guided replica placement, reduces object copying in a distributed data store while increasing data availability. The second shows how availability prediction can improve routing in delay-tolerant networks. The third combines availability prediction with virus modeling to improve forecasts of global infection dynamics.
Chameleon: a dual-mode 802.11b/Bluetooth receiver system design In this paper, an approach to map the Bluetooth and 802.11b standards specifications into an architecture and specifications for the building blocks of a dual-mode direct conversion receiver is proposed. The design procedure focuses on optimizing the performance in each operating mode while attaining an efficient dual-standard solution. The impact of the expected receiver nonidealities and the characteristics of each building block are evaluated through bit-error-rate simulations. The proposed receiver design is verified through a fully integrated implementation from low-noise amplifier to analog-to-digital converter using IBM 0.25-μm BiCMOS technology. Experimental results from the integrated prototype meet the specifications from both standards and are in good agreement with the target sensitivity.
An efficient low-cost fixed-point digital down converter with modified filter bank In radar system, as the most important part of IF radar receiver, digital down converter (DDC) extracts the baseband signal needed from modulated IF signal, and down-samples the signal with decimation factor of 20. This paper proposes an efficient low-cost structure of DDC, including NCO, mixer and a modified filter bank. The modified filter bank adopts a high-efficiency structure, including a 5-stage CIC filter, a 9-tap CFIR filter and a 15-tap HB filter, which reduces the complexity and cost of implementation compared with the traditional filter bank. Then an optimized fixed-point programming is designed in order to implement DDC on fixed-point DSP or FPGA. The simulation results show that the proposed DDC achieves an expectant specification in application of IF radar receiver.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
1.2
0.2
0.2
0.2
0.2
0.2
0.1
0
0
0
0
0
0
0
DKS (N, k, f): A Family of Low Communication, Scalable and Fault-Tolerant Infrastructures for P2P Applications In this paper, we present DKS (N, k, f), a family of infrastructures for building Peer-To-Peer applications. Each instance of DKS (N, k, f) is a fully decentralized overlay network characterized by three parameters: N, the maximum number of nodes that can be in the network; k, the search arity within the network; and f, the degree of fault tolerance. Once these parameters are instantiated, the resulting network has several desirable properties. The first property, which is the main contribution of this paper, is that there is no separate procedure for maintaining routing tables; instead, any out-of-date or erroneous routing entry is eventually corrected on-the-fly, thereby eliminating unnecessary bandwidth consumption. The second property is that each lookup request is resolved in at most log_k(N) overlay hops under normal operations. Third, each node maintains only (k - 1) log_k(N) + 1 addresses of other nodes for routing purposes. Fourth, new nodes can join and existing nodes can leave at will with a negligible disturbance to the ability to resolve lookups in log_k(N) hops on average. Fifth, any key/value pair that is inserted into the system is guaranteed to be located even in the presence of concurrent joins. Sixth, even if f consecutive nodes fail simultaneously, correct lookup is still guaranteed.
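The quoted bounds are easy to tabulate: lookups take at most log_k(N) hops with (k - 1) log_k(N) + 1 routing entries per node. A small check for an assumed network size of N = 2^20:

    def ceil_log(N, k):
        """Smallest h with k**h >= N (exact integer computation)."""
        h, reach = 0, 1
        while reach < N:
            reach *= k
            h += 1
        return h

    N = 2 ** 20                            # assumed network size
    for k in (2, 4, 16):
        hops = ceil_log(N, k)              # worst-case lookup hops
        entries = (k - 1) * hops + 1       # routing table size
        print(f"k={k:2d}: {hops:2d} hops, {entries:3d} routing entries")
    # k=2: 20 hops, 21 entries; k=4: 10 hops, 31; k=16: 5 hops, 76.
    # Larger search arity k trades routing-table size for fewer hops.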
Cross Component Optimisation in a High Level Category-Based Language High level programming languages offer many benefits in terms of ease of use, encapsulation etc. However, they historically suffer from poor performance. In this paper we investigate improving the performance of a numerical code written in a high-level language by using cross-component optimisation. We compare the results with traditional approaches such as the use of high performance libraries or Fortran. We demonstrate that our cross-component optimisation is highly effective, with a speed-up of up to 1.43× over a program augmented with calls to the ATLAS BLAS library, and 1.5× over a pure Fortran equivalent.
A Peer-to-Peer Approach to Enhance Middleware Connectivity One of the problems of middleware for shared state is that they are designed, explicitly or implicitly, for symmetric networks. However, since the Internet is not symmetric, end-to-end process connectivity cannot be guaranteed. Our solution to this is to provide the middleware with a network abstraction layer that masks the asymmetry of the network and provides the illusion of a symmetric network. We describe the communication service of our middleware, the Distribution Subsystem (DSS), which carefully separates connections to remote processes from the protocols that communicate over them. This separation is used to plug-in a peer-to-peer module to provide symmetric and persistent connectivity. The P2P module can provide both up-to-date addresses for mobile processes as well as route discovery to overcome asymmetric links.
Improving the Scalability of Logarithmic-Degree DHT-Based Peer-to-Peer Networks High scalability in Peer-to-Peer (P2P) systems has been achieved with the emergence of the networks based on Distributed Hash Table (DHT). Most of the DHTs can be regarded as exponential networks. Their network size evolves exponentially while the minimal distance between two nodes as well as the routing table size, i.e., the degree, at each node evolve linearly or remain constant. In this paper we present a model to better characterize most of the current logarithmic-degree DHTs. We express them in terms of absolute and relative exponential structured networks. In relative exponential networks, such as Chord, where all nodes are reachable in at most H hops, the number of paths of length less than or equal to H between two nodes grows exponentially with the network size. We propose the Tango approach to reduce this redundancy and to improve other properties such as reducing the lookup path length. We analyze Tango and show that it is more scalable than the current logarithmic-degree DHTs. Given its scalability and structuring flexibility, we chose Tango to be the algorithm underlying our P2P middleware.
Beernet: Building Self-Managing Decentralized Systems with Replicated Transactional Storage Distributed systems with a centralized architecture present the well known problems of single point of failure and single point of congestion; therefore, they do not scale. Decentralized systems, especially as peer-to-peer networks, are gaining popularity because they scale well, and do not need a server to work. However, their complexity is higher due to the lack of a single point of control and synchronization, and because consistent decentralized storage is difficult to maintain when data constantly evolves. Self-management is a way of handling this higher complexity. In this paper, the authors present a decentralized system built with a structured overlay network that is self-organized and self-healing, providing a transactional replicated storage for small or large scale systems.
Using the complementary nature of node joining and leaving to handle the churn problem in P2P networks Churn is a basic and inherent problem in P2P networks. Many relevant studies have been carried out, but all lack generality. In this paper, a general solution is proposed that lets a peer-to-peer (P2P) network largely ignore the churn problem by introducing a logic layer named Dechurn, in which most churn can be eliminated. To utilize the complementary nature of node joining and leaving, a network scheme named Constellation for handling churn is designed on the Dechurn layer, through which the resources cached in a node for its spouse node that has left the network would be taken over by a node in its latent period. The simulation results indicate that the proposed solution is effective and efficient in handling churn and easy to put into practice.
Designing Less-Structured P2P Systems for the Expected High Churn We address the problem of highly transient populations in unstructured and loosely structured peer-to-peer (P2P) systems. We propose a number of illustrative query-related strategies and organizational protocols that, by taking into consideration the expected session times of peers (their lifespans), yield systems with performance characteristics more resilient to the natural instability of their environments. We first demonstrate the benefits of lifespan-based organizational protocols in terms of end-application performance and in the context of dynamic and heterogeneous Internet environments. We do this using a number of currently adopted and proposed query-related strategies, including methods for query distribution, caching, and replication. We then show, through trace-driven simulation and wide-area experimentation, the performance advantages of lifespan-based, query-related strategies when layered over currently employed and lifespan-based organizational protocols. While merely illustrative, the evaluated strategies and protocols clearly demonstrate the advantages of considering peers' session time in designing widely-deployed P2P systems.
Gossip-based Reputation Aggregation for Unstructured Peer-to-Peer Networks Peer-to-peer (P2P) reputation systems are needed to evaluate the trustworthiness of participating peers and to combat selfish and malicious peer behaviors. The reputation system collects locally generated peer feedbacks and aggregates them to yield global reputation scores. Development of decentralized reputation system is in great demand for unstructured P2P networks since most P2P applications on the Internet are unstructured. In the absence of fast hashing and searching mechanisms, how to perform efficient reputation aggregation is a major challenge on unstructured P2P computing. We propose a novel reputation aggregation scheme called GossipTrust. This system computes global reputation scores of all nodes concurrently. By resorting to a gossip protocol and leveraging the power nodes, GossipTrust is adapted to peer dynamics and robust to disturbance by malicious peers. Simulation experiments demonstrate the system as scalable, accurate, robust and fault-tolerant. These results prove the claimed advantages in low aggregation overhead, storage efficiency, and scoring accuracy in unstructured P2P networks. With minor modifications, the system is also applicable to structured P2P systems with projected better performance.
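The aggregation core of such a gossip protocol can be illustrated with pairwise averaging, which preserves the sum of the scores and drives every node toward the global mean. This toy sketch conveys only that convergence idea; GossipTrust's actual protocol additionally uses power nodes, feedback weighting, and compact score storage:

    import random

    random.seed(7)
    scores = [0.9, 0.1, 0.5, 0.7, 0.3]        # local reputation estimates
    n = len(scores)

    for _ in range(200):                      # gossip rounds
        i, j = random.randrange(n), random.randrange(n)
        avg = (scores[i] + scores[j]) / 2     # pairwise averaging exchange
        scores[i] = scores[j] = avg

    print([round(s, 3) for s in scores])      # all close to the mean 0.5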
Threaded code The concept of “threaded code” is presented as an alternative to machine language code. Hardware and software realizations of it are given. In software it is realized as interpretive code not needing an interpreter. Extensions and optimizations are mentioned.
Wide-Band CMOS Low-Noise Amplifier Exploiting Thermal Noise Canceling Known elementary wide-band amplifiers suffer from a fundamental tradeoff between noise figure (NF) and source impedance matching, which limits the NF to values typically above 3 dB. Global negative feedback can be used to break this tradeoff, however, at the price of potential instability. In contrast, this paper presents a feedforward noise-canceling technique, which allows for simultaneous noise...
Information Spreading in Stationary Markovian Evolving Graphs Markovian evolving graphs are dynamic-graph models where the links among a fixed set of nodes change during time according to an arbitrary Markovian rule. They are extremely general and they can well describe important dynamic-network scenarios. We study the speed of information spreading in the stationary phase by analyzing the completion time of the flooding mechanism. We prove a general theorem that establishes an upper bound on flooding time in any stationary Markovian evolving graph in terms of its node-expansion properties. We apply our theorem in two natural and relevant cases of such dynamic graphs. Geometric Markovian evolving graphs where the Markovian behaviour is yielded by n mobile radio stations, with fixed transmission radius, that perform independent random walks over a square region of the plane. Edge-Markovian evolving graphs where the probability of existence of any edge at time t depends on the existence (or not) of the same edge at time t-1. In both cases, the obtained upper bounds hold with high probability and they are nearly tight. In fact, they turn out to be tight for a large range of the values of the input parameters. As for geometric Markovian evolving graphs, our result represents the first analytical upper bound for flooding time on a class of concrete mobile networks.
A Linear Permanent-Magnet Motor for Active Vehicle Suspension Traditionally, automotive suspension designs with passive components have been a compromise between the three conflicting demands of road holding, load carrying, and passenger comfort. Linear electromagnetic motor-based active suspension has superior controllability and bandwidth, provides shock load isolation between the vehicle chassis and wheel, and, therefore, has great potential. It also has the ability to recover energy that is dissipated in the shock absorber in the passive systems and results in a much more energy-efficient suspension system. This paper describes the issues pertinent to the design of a high force density tubular permanent-magnet (PM) motor for active suspension in terms of performance optimization, the use of a solid stator core for low-cost production and its impact on thrust force, and the assessment of demagnetization risk.
20.3 A feedforward controlled on-chip switched-capacitor voltage regulator delivering 10W in 32nm SOI CMOS On-chip (or fully integrated) switched-capacitor (SC) voltage regulators (SCVR) have recently received a lot of attention due to their ease of monolithic integration. The use of deep trench capacitors can lead to SCVR implementations that simultaneously achieve high efficiency, high power density, and fast response time. For the application of granular power distribution of many-core microprocessor systems, the on-chip SCVR must maintain an output voltage above a certain minimum level V_OUT,min in order for the microprocessor core to meet setup time requirements. Following a transient load change, the output voltage typically exhibits a droop due to parasitic inductances and resistances in the power distribution network. Therefore, the steady-state output voltage is kept high enough to ensure V_OUT > V_OUT,min at all times, thereby introducing an output voltage overhead that leads to increased system power consumption. The output voltage droop can be reduced by implementing fast regulation and a sufficient amount of on-chip decoupling capacitance. However, a large amount of on-chip decoupling capacitance is needed to significantly reduce the droop, and it becomes impractical to implement owing to the large chip area overhead required. This paper presents a feedforward control scheme that significantly reduces the output voltage droop in the presence of a large input voltage droop following a transient event. This in turn reduces the required output voltage overhead and may lead to significant overall system power savings.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM), and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM), and 10.48× (CMOS-based PIM), on average.
1.023222
0.030071
0.030071
0.022179
0.017624
0.016905
0.012661
0.002619
0
0
0
0
0
0
Prefetch Side-Channel Attacks: Bypassing SMAP and Kernel ASLR. Modern operating systems use hardware support to protect against control-flow hijacking attacks such as code-injection attacks. Typically, write access to executable pages is prevented and kernel mode execution is restricted to kernel code pages only. However, current CPUs provide no protection against code-reuse attacks like ROP. ASLR is used to prevent these attacks by making all addresses unpredictable for an attacker. Hence, the kernel security relies fundamentally on preventing access to address information. We introduce Prefetch Side-Channel Attacks, a new class of generic attacks exploiting major weaknesses in prefetch instructions. This allows unprivileged attackers to obtain address information and thus compromise the entire system by defeating SMAP, SMEP, and kernel ASLR. Prefetch can fetch inaccessible privileged memory into various caches on Intel x86. It also leaks the translation-level for virtual addresses on both Intel x86 and ARMv8-A. We build three attacks exploiting these properties. Our first attack retrieves an exact image of the full paging hierarchy of a process, defeating both user space and kernel space ASLR. Our second attack resolves virtual to physical addresses to bypass SMAP on 64-bit Linux systems, enabling ret2dir attacks. We demonstrate this from unprivileged user programs on Linux and inside Amazon EC2 virtual machines. Finally, we demonstrate how to defeat kernel ASLR on Windows 10, enabling ROP attacks on kernel and driver binary code. We propose a new form of strong kernel isolation to protect commodity systems, incurring an overhead of only 0.06-5.09%.
MicroScope: enabling microarchitectural replay attacks The popularity of hardware-based Trusted Execution Environments (TEEs) has recently skyrocketed with the introduction of Intel's Software Guard Extensions (SGX). In SGX, the user process is protected from supervisor software, such as the operating system, through an isolated execution environment called an enclave. Despite the isolation guarantees provided by TEEs, numerous microarchitectural side channel attacks have been demonstrated that bypass their defense mechanisms. But, not all hope is lost for defenders: many modern fine-grain, high-resolution side channels---e.g., execution unit port contention---introduce large amounts of noise, complicating the adversary's task to reliably extract secrets. In this work, we introduce Microarchitectural Replay Attacks, whereby an SGX adversary can denoise nearly arbitrary microarchitectural side channels in a single run of the victim, by causing the victim to repeatedly replay a page-faulting instruction. We design, implement, and demonstrate our ideas in a framework, called MicroScope, and use it to denoise notoriously noisy side channels. Our main result shows how MicroScope can denoise the execution unit port contention channel. Specifically, we show how MicroScope can reliably detect the presence or absence of as few as two divide instructions in a single logical run of the victim program. Such an attack could be used to detect subnormal input to individual floating-point instructions, or infer branch directions in an enclave despite today's countermeasures that flush the branch predictor at the enclave boundary. We also use MicroScope to single-step and denoise a cache-based attack on the OpenSSL implementation of AES. Finally, we discuss the broader implications of microarchitectural replay attacks---as well as discuss other mechanisms that can cause replays.
CheckMate - Automated Synthesis of Hardware Exploits and Security Litmus Tests. Recent research has uncovered a broad class of security vulnerabilities in which confidential data is leaked through programmer-observable microarchitectural state. In this paper, we present CheckMate, a rigorous approach and automated tool for determining if a microarchitecture is susceptible to specified classes of security exploits, and for synthesizing proof-of-concept exploit code when it is. Our approach adopts "microarchitecturally happens-before" (μhb) graphs which prior work designed to capture the subtle orderings and interleavings of hardware execution events when programs run on a microarchitecture. CheckMate extends μhb graphs to facilitate modeling of security exploit scenarios and hardware execution patterns indicative of classes of exploits. Furthermore, it leverages relational model finding techniques to enable automated exploit program synthesis from microarchitecture and exploit pattern specifications. As a case study, we use CheckMate to evaluate the susceptibility of a speculative out-of-order processor to Flush+Reload cache side-channel attacks. The automatically synthesized results are programs representative of Meltdown and Spectre attacks. We then evaluate the same processor on its susceptibility to a different timing side-channel attack: Prime+Probe. Here, CheckMate synthesized new exploits that are similar to Meltdown and Spectre in that they leverage speculative execution, but unique in that they exploit distinct microarchitectural behaviors---speculative cache line invalidations rather than speculative cache pollution---to form a side-channel. Most importantly, our results validate the CheckMate approach to formal hardware security verification and the ability of the CheckMate tool to detect real-world vulnerabilities.
A Benchmark Suite for Evaluating Caches' Vulnerability to Timing Attacks Based on improvements to an existing three-step model for cache timing-based attacks, this work presents 88 Strong types of theoretical timing-based vulnerabilities in processor caches. It also presents and implements a new benchmark suite that can be used to test if a processor cache is vulnerable to one of the attacks. In total, there are 1094 automatically-generated test programs which cover the 88 Strong theoretical vulnerabilities. The benchmark suite generates the Cache Timing Vulnerability Score (CTVS) which can be used to evaluate how vulnerable a specific cache implementation is to different attacks. A smaller CTVS means the design is more secure. Evaluation is conducted on commodity Intel and AMD processors and shows how differences in processor implementations can result in vulnerability to different types of attacks. Further, the benchmarks and the CTVS can be used in simulation to help designers of new secure processors and caches evaluate their designs' susceptibility to cache timing-based attacks.
Phantomcache: Obfuscating Cache Conflicts With Localized Randomization Cache conflicts due to deterministic memory-to-cache mapping have long been exploited to leak sensitive information such as secret keys. While randomized mapping has been fully investigated for L1 caches, it remains unresolved how to secure the much larger last-level cache (LLC). Recent solutions periodically change the mapping strategy to disrupt the crafting of conflicted addresses, which is a critical attack procedure to exploit cache conflicts. Remapping, however, increases both miss rate and access latency. We present PhantomCache for securing an LLC with remapping-free randomized mapping. We propose a localized randomization technique to bound randomized mapping of a memory address within only a limited number of cache sets. The small randomization space offers fast set search over an LLC in a memory access. The intrinsic randomness still suffices to obfuscate conflicts and disrupt efficient exploitation of conflicted addresses. We evaluate PhantomCache against an attacker mounting the state-of-the-art linear-complexity attack. To secure an 8-bank 16 MB 16-way LLC, PhantomCache confines the randomization space of an address within 8 sets and brings only 1.20% performance degradation on individual benchmarks, 0.50% performance degradation on mixed workloads, and 0.50% storage overhead per cache line, which is 2x and 9x more efficient than the state-of-the-art solutions. Moreover, PhantomCache is solely an architectural solution and requires no software change.
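The localized-randomization idea can be sketched briefly: a keyed hash maps each address to a small number of candidate sets, and lookups probe only those sets. Below is a minimal sketch, with blake2b standing in for the hardware hash function and a toy one-line-per-set cache; the banking, salt management, and replacement details of PhantomCache itself are simplified away.

```python
import hashlib

NUM_SETS, R = 1024, 8  # total cache sets; candidate sets per address

def candidate_sets(addr, key):
    """Keyed hash maps an address to R candidate sets, so the
    memory-to-cache mapping is randomized but localized."""
    out = []
    for i in range(R):
        digest = hashlib.blake2b(addr.to_bytes(8, "little") + bytes([i]),
                                 key=key, digest_size=4).digest()
        out.append(int.from_bytes(digest, "little") % NUM_SETS)
    return out

def lookup(cache, addr, key):
    """A hit if any of the R candidate sets holds the line."""
    return any(cache.get(s) == addr for s in candidate_sets(addr, key))

key = b"per-boot secret!"
cache = {}                                  # toy cache: one line per set
cache[candidate_sets(0xDEADBEEF, key)[0]] = 0xDEADBEEF
print(lookup(cache, 0xDEADBEEF, key))       # True
print(lookup(cache, 0xCAFEBABE, key))       # False
```

Keeping R small is the point: an attacker cannot predict which of the NUM_SETS sets a line occupies, yet a lookup only ever probes R sets, which is what keeps access latency acceptable without periodic remapping.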
Randomized Last-Level Caches Are Still Vulnerable to Cache Side-Channel Attacks! But We Can Fix It Cache randomization has recently been revived as a promising defense against conflict-based cache side-channel attacks. As two of the latest implementations, CEASER-S and ScatterCache both claim to thwart conflict-based cache side-channel attacks using randomized skewed caches. Unfortunately, our experiments show that an attacker can easily find a usable eviction set within the chosen remap period...
The Spy in the Sandbox: Practical Cache Attacks in JavaScript and their Implications We present a micro-architectural side-channel attack that runs entirely in the browser. In contrast to previous work in this genre, our attack does not require the attacker to install software on the victim's machine; to facilitate the attack, the victim needs only to browse to an untrusted webpage that contains attacker-controlled content. This makes our attack model highly scalable, and extremely relevant and practical to today's Web, as most desktop browsers currently used to access the Internet are affected by such side channel threats. Our attack, which is an extension to the last-level cache attacks of Liu et al., allows a remote adversary to recover information belonging to other processes, users, and even virtual machines running on the same physical host with the victim web browser. We describe the fundamentals behind our attack, and evaluate its performance characteristics. In addition, we show how it can be used to compromise user privacy in a common setting, letting an attacker spy on a victim that uses private browsing. Defending against this side channel is possible, but the required countermeasures can exact an impractical cost on benign uses of the browser.
Streamline: a fast, flushless cache covert-channel attack by enabling asynchronous collusion Covert-channel attacks exploit contention on shared hardware resources such as processor caches to transmit information between colluding processes on the same system. In recent years, covert channels leveraging cacheline-flush instructions, such as Flush+Reload and Flush+Flush, have emerged as the fastest cross-core attacks. However, current attacks are limited in their applicability and bit-rate not due to any fundamental hardware limitations, but due to their protocol design requiring flush instructions and tight synchronization between sender and receiver, where both processes synchronize every bit-period to maintain low error-rates. In this paper, we present Streamline, a flush-less covert-channel attack faster than all prior known attacks. The key insight behind the higher channel bandwidth is asynchronous communication. Streamline communicates over a sequence of shared addresses (larger than the cache size), where the sender can move to the next address after transmitting each bit without waiting for the receiver. Furthermore, it ensures that addresses accessed by the sender are preserved in the cache until the receiver has accessed them. Finally, by the time the sender accesses the entire sequence and wraps around, the cache-thrashing property ensures that the previously transmitted addresses are automatically evicted from the cache without any cacheline flushes, which ensures functional correctness while simultaneously improving channel bandwidth. To orchestrate Streamline on a real system, we overcome multiple challenges, such as circumventing hardware optimizations (prefetching and replacement policy), and ensuring that the sender and receiver have similar execution rates. We demonstrate Streamline on an Intel Skylake CPU and show that it achieves a bit-rate of 1801 KB/s, which is 3x to 3.6x faster than the previous fastest Take-a-Way (588 KB/s) and Flush+Flush (496 KB/s) attacks, at comparable error rates. Unlike prior attacks, Streamline only relies on generic properties of caches and is applicable to processors of all ISAs (x86, ARM, etc.) and micro-architectures (Intel, AMD, etc.).
Ramulator: A Fast and Extensible DRAM Simulator Recently, both industry and academia have proposed many different roadmaps for the future of DRAM. Consequently, there is a growing need for an extensible DRAM simulator, which can be easily modified to judge the merits of today’s DRAM standards as well as those of tomorrow. In this paper, we present Ramulator, a fast and cycle-accurate DRAM simulator that is built from the ground up for extensibility. Unlike existing simulators, Ramulator is based on a generalized template for modeling a DRAM system, which is only later infused with the specific details of a DRAM standard. Thanks to such a decoupled and modular design, Ramulator is able to provide out-of-the-box support for a wide array of DRAM standards: DDR3/4, LPDDR3/4, GDDR5, WIO1/2, HBM, as well as some academic proposals (SALP, AL-DRAM, TLDRAM, RowClone, and SARP). Importantly, Ramulator does not sacrifice simulation speed to gain extensibility: according to our evaluations, Ramulator is 2.5× faster than the next fastest simulator. Ramulator is released under the permissive BSD license.
Threaded code The concept of “threaded code” is presented as an alternative to machine language code. Hardware and software realizations of it are given. In software it is realized as interpretive code not needing an interpreter. Extensions and optimizations are mentioned.
Polynomial Fuzzy Models for Nonlinear Control: A Taylor Series Approach Classical Takagi-Sugeno (T-S) fuzzy models are formed by convex combinations of linear consequent local models. Such fuzzy models can be obtained from nonlinear first-principle equations by the well-known sector-nonlinearity modeling technique. This paper extends the sector-nonlinearity approach to the polynomial case. This way, generalized polynomial fuzzy models are obtained. The new class of models is polynomial, both in the membership functions and in the consequent models. Importantly, T-S models become a particular case of the proposed technique. Recent possibilities for stability analysis and controller synthesis are also discussed. A set of examples shows that polynomial modeling is able to reduce conservativeness with respect to standard T-S approaches as the degrees of the involved polynomials increase.
An Accurate, Continuous, and Lossless Self-Learning CMOS Current-Sensing Scheme for Inductor-Based DC-DC Converters Sensing current is a fundamental function in power supply circuits, especially as it generally applies to protection and feedback control. Emerging state-of-the-art switching supplies, in fact, are now exploring ways to use this sensed-current information to improve transient response, power efficiency, and compensation performance by appropriately self-adjusting, on the fly, frequency, inductor r...
A 0.1–6.0-GHz Dual-Path SDR Transmitter Supporting Intraband Carrier Aggregation in 65-nm CMOS A 4.8-mm² 0.1–6.0-GHz dual-path software-defined radio transmitter supporting intraband carrier aggregation (CA) in 65-nm CMOS is presented. A simple approach is proposed to support intraband CA signals with only one I-Q baseband path. By utilizing the power-scalable and feedforward compensation techniques, the power of the wideband analog baseband is minimized. The transmitter consists of a high gain-range main path and a low-power subpath to cooperatively cover different standards over 0.1–6.0 GHz with more flexibility. The reconfigurable power amplifier (PA) driver achieves wideband frequency coverage with efficiency-enhanced on-chip transformers and improved switched-capacitor arrays. This transmitter achieves <−50-dBc image rejection ratio and <−40-dBc local oscillating signal leakage after the calibration. System verifications have demonstrated −31/−51-dBc ACLR1/ACLR2 (adjacent channel leakage ratio) at 3-dBm output power for 2.3-GHz LTE20 in the main path and 1.7% error vector magnitude (EVM) at 1.5-dBm output for 1.8-GHz WCDMA in the subpath. Both paths enable SAW-less FDD operations with −153 or −156 dBc/Hz carrier-to-noise ratio at 200-MHz frequency offset. Finally, the dual CA signals with 55-MHz frequency spacing are verified, showing the EVM of 1.2% and 0.8%, respectively, and exhibiting the intraband CA capability.
A Bidirectional Neural Interface IC With Chopper Stabilized BioADC Array and Charge Balanced Stimulator. We present a bidirectional neural interface with a 4-channel biopotential analog-to-digital converter (bioADC) and a 4-channel current-mode stimulator in 180 nm CMOS. The bioADC directly transduces microvolt biopotentials into a digital representation without a voltage-amplification stage. Each bioADC channel comprises a continuous-time first-order ΔΣ modulator with a chopper-stabilized OTA input ...
1.019156
0.016667
0.016667
0.016667
0.016667
0.01
0.006498
0.003333
0.000027
0
0
0
0
0
A FPGA based generalized parametrizable modulator The present paper deals with the design and development of a generalized parametrizable modulator (GPM) that can perform Gaussian minimum shift keying (GMSK) and quadrature phase shift keying (QPSK) modulation in a reconfigurable baseband modulator for a software defined radio (SDR) architecture. GMSK is the underlying modulation scheme for the Global System for Mobile Communications (GSM) standard, while QPSK is the basic modulation scheme for the Code Division Multiple Access (CDMA) standard. Although GMSK is an inherently nonlinear modulation technique, the present work uses a linearly approximated GMSK technique to take advantage of the simplicity of the filter structure. The normalised error in amplitude has been simulated as a function of modulation index and bandwidth-time product, and the modulator is capable of working at data rates that satisfy the requirements of almost all 2G and 3G air interface standards.
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use dominance frontiers, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
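For readers unfamiliar with dominance frontiers, the sketch below computes them from precomputed immediate dominators using the well-known predecessor-walk formulation of Cooper, Harvey, and Kennedy (not necessarily the exact algorithm of this paper): every join node is added to the frontier of each node on a predecessor's dominator chain, stopping at the join node's immediate dominator.

```python
def dominance_frontiers(preds, idom):
    """preds: node -> list of CFG predecessors; idom: node -> immediate
    dominator (the entry node maps to itself). Returns node -> DF set."""
    df = {n: set() for n in preds}
    for n, ps in preds.items():
        if len(ps) >= 2:                 # only join nodes contribute
            for p in ps:
                runner = p
                while runner != idom[n]:
                    df[runner].add(n)    # n is in runner's frontier
                    runner = idom[runner]
    return df

# Diamond CFG: entry -> a, b; then a, b -> merge.
preds = {"entry": [], "a": ["entry"], "b": ["entry"], "merge": ["a", "b"]}
idom  = {"entry": "entry", "a": "entry", "b": "entry", "merge": "entry"}
print(dominance_frontiers(preds, idom))
# {'entry': set(), 'a': {'merge'}, 'b': {'merge'}, 'merge': set()}
```

The frontier of a node is exactly where its definitions stop dominating, which is why SSA construction places phi-functions at dominance-frontier nodes.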
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ⋄W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ⋄W. Thus, ⋄W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
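Chord's single operation, mapping a key onto a node, can be sketched with consistent hashing on an identifier ring. Below is a minimal sketch that assumes a global view of the membership; real Chord answers the same successor query in O(log N) hops via finger tables rather than a sorted array.

```python
import bisect
import hashlib

M = 2 ** 16  # size of the identifier ring (2^m)

def chord_id(name):
    return int.from_bytes(hashlib.sha1(name.encode()).digest(), "big") % M

class Ring:
    def __init__(self, node_names):
        self.ids = sorted(chord_id(n) for n in node_names)

    def successor(self, key):
        """A key is stored at the first node whose id follows
        hash(key) on the ring (wrapping around at the top)."""
        k = chord_id(key)
        i = bisect.bisect_left(self.ids, k)
        return self.ids[i % len(self.ids)]

ring = Ring([f"node{i}" for i in range(8)])
print(ring.successor("some-data-item"))  # id of the responsible node
```

Because only the successor relation matters, a node joining or leaving moves just the keys between it and its neighbor, which is the property behind Chord's efficiency under churn.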
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area² product (EDA²P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA²P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
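As one concrete instance of the method, the standard scaled-form ADMM iteration for the lasso alternates a ridge-like x-update, a soft-thresholding z-update, and a dual update. Below is a minimal NumPy sketch with a fixed penalty parameter rho; stopping criteria and over-relaxation are omitted.

```python
import numpy as np

def lasso_admm(A, b, lam=0.1, rho=1.0, iters=200):
    """Minimize 0.5*||Ax - b||^2 + lam*||x||_1 via scaled-form ADMM."""
    n = A.shape[1]
    AtA, Atb = A.T @ A, A.T @ b
    L = AtA + rho * np.eye(n)       # same matrix reused every iteration
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    soft = lambda v, k: np.sign(v) * np.maximum(np.abs(v) - k, 0.0)
    for _ in range(iters):
        x = np.linalg.solve(L, Atb + rho * (z - u))  # x-update (ridge solve)
        z = soft(x + u, lam / rho)                   # z-update (prox of l1)
        u = u + x - z                                # scaled dual update
    return z

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20)
x_true[:3] = [1.0, -2.0, 0.5]
b = A @ x_true + 0.01 * rng.standard_normal(50)
print(np.round(lasso_admm(A, b), 2))  # sparse estimate close to x_true
```

The split is typical of ADMM: the smooth least-squares term and the nonsmooth l1 term each get the update they are easy for, and the dual variable u stitches the two together.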
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
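The membership-vector mechanism can be sketched directly: each node draws a random bit string, and the level-i list containing a node holds exactly the nodes that share its first i bits, kept in key order. A minimal sketch of the static structure only; the concurrent join, search, and repair algorithms are omitted.

```python
import random

def build_skip_graph(keys, levels=3, seed=7):
    """Assign each key a random membership vector and group keys by
    membership-vector prefix: the level-i lists of a skip graph."""
    rng = random.Random(seed)
    mv = {k: tuple(rng.randrange(2) for _ in range(levels)) for k in keys}
    lists = {}
    for lvl in range(levels + 1):
        for k in sorted(keys):                    # keep key order
            lists.setdefault((lvl, mv[k][:lvl]), []).append(k)
    return mv, lists

mv, lists = build_skip_graph([3, 9, 14, 21, 30, 42])
for (lvl, prefix), members in sorted(lists.items()):
    print(f"level {lvl} prefix {prefix}: {members}")
# Level 0 is one sorted list of all keys; each level roughly halves the
# lists, which yields skip-list-like O(log n) search. Unlike a skip list,
# every node appears in a list at every level, giving the redundancy
# that makes the structure resilient to node failures.
```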
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)-size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε2}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D+√n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(log N), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator; while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via error amplifier) with a moderate-gain high-frequency path (via bandpass filter) at the inputs of PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate two zeros required in a Type III compensator. Constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process with constant Gm/C biasing technique being applied to one of the designs. Measurement result shows that converter output is settled within 7 μs for a load current step of 500 mA. Peak efficiency of 97% is obtained at 360 mW output power, and high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of proposed compensator is reduced by > 75 % in both designs, compared to an equivalent conventional Type III compensator.
Fundamental analysis of a car to car visible light communication system This paper presents a mathematical model for car-to-car (C2C) visible light communications (VLC) that aims to predict the system performance under different communication geometries. A market-weighted headlamp beam pattern model is employed. We consider both the line-of-sight (LOS) and non-line-of-sight (NLOS) links, and outline the relationship between the communication distance, the system bit error rate (BER) performance and the BER distribution on a vertical plane. Results show that by placing a photodetector (PD) at a height of 0.2-0.4 m above the road surface in the car, the communications coverage range can be extended up to 20 m at a data rate of 2 Mbps.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM), and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM), and 10.48× (CMOS-based PIM), on average.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Are Coherence Protocol States Vulnerable to Information Leakage? Most commercial multi-core processors incorporate hardware coherence protocols to support efficient data transfers and updates between their constituent cores. While hardware coherence protocols provide immense benefits for application performance by removing the burden of software-based coherence, we note that understanding the security vulnerabilities posed by such oft-used, widely-adopted processor features is critical for secure processor designs in the future. In this paper, we demonstrate a new vulnerability exposed by cache coherence protocol states. We present novel insights into how adversaries could cleverly manipulate the coherence states on shared cache blocks, and construct covert timing channels to illegitimately communicate secrets to the spy. We demonstrate 6 different practical scenarios for covert timing channel construction. In contrast to prior works, we assume a broader adversary model where the trojan and spy can either exploit explicitly shared read-only physical pages (e.g., shared library code), or use memory deduplication feature to implicitly force create shared physical pages. We demonstrate how adversaries can manipulate combinations of coherence states and data placement in different caches to construct timing channels. We also explore how adversaries could exploit multiple caches and their associated coherence states to improve transmission bandwidth with symbols encoding multiple bits. Our experimental results on commercial systems show that the peak transmission bandwidths of these covert timing channels can vary between 700 and 1100 Kbits/sec. To the best of our knowledge, our study is the first to highlight the vulnerability of hardware cache coherence protocols to timing channels that can help computer architects to craft effective defenses against exploits on such critical processor features.
SpectreRewind: Leaking Secrets to Past Instructions Transient execution attacks use microarchitectural covert channels to leak secrets that should not have been accessible during logical program execution. Commonly used micro-architectural covert channels are those that leave lasting footprints in the micro-architectural state, for example, a cache state change, from which the secret is recovered after the transient execution is completed. In this paper, we present SpectreRewind, a new approach to create and exploit contention-based covert channels for transient execution attacks. In our approach, a covert channel is established by issuing the necessary instructions logically before the transiently executed victim code. Unlike prior contention based covert channels, which require simultaneous multi-threading (SMT), SpectreRewind supports covert channels based on a single hardware thread, making it viable on systems where the attacker cannot utilize SMT. We show that contention on the floating point division unit on commodity processors can be used to create a high-performance (~100 KB/s), low-noise covert channel for transient execution attacks instead of commonly used flush+reload based cache covert channels. We also show that the proposed covert channel works in the JavaScript sandbox environment of a Chrome browser.
Meltdown: reading kernel memory from user space Lessons learned from Meltdown's exploitation of the weaknesses in today's processors.
TLB index-based tagging for cache energy reduction Conventional cache tag matching is based on addresses to identify correct data in caches. However, this tagging scheme is not efficient because tag bits are unnecessarily large. From our observations, there are not many unique tag bits due to typically small working sets, which are conventionally captured by TLBs. To effectively exploit this fact, we propose a TLB index-based cache tagging scheme. This new tagging scheme reduces the required number of tag bits to one-fourth of those of the conventional tagging scheme. The reduced tag bits decrease the tag-bit array area by 72% and its energy consumption by 58%. From our experiments, our proposed new tagging scheme reduces instruction cache energy consumption by 13% for embedded systems.
Covert Timing Channels Exploiting Cache Coherence Hardware: Characterization and Defense Information leakage of sensitive data has become one of the fastest-growing concerns among computer users. With adversaries turning to hardware for exploits, caches are frequently a target for timing channels since they present different timing profiles for cache miss and hit latencies. Such timing channels operate by having an adversary covertly communicate secrets to a spy simply through modulating resource timing without leaving any physical evidence. In this article, we demonstrate a new vulnerability exposed by cache coherence protocols where adversaries could manipulate the coherence states on certain cache blocks to alter cache access timing and communicate secrets illegitimately. Our threat model assumes the trojan and spy can either exploit explicitly shared read-only physical pages (e.g., shared library code), or use memory deduplication feature to implicitly force create shared physical pages. We demonstrate a template that adversaries may use to construct covert timing channels through manipulating combinations of coherence states and data placement in different caches. We investigate several classes of cache coherence protocols, and observe that both directory-based and snoopy protocols can be subject to covert timing channel attacks. We identify the root cause of the vulnerability to be the existence of access latency differences for cache lines in the read-only cache coherence states Exclusive and Shared. For defense, we propose a slightly modified cache coherence scheme that will enable the last level cache to directly respond to read data requests in these read-only coherence states, and avoid any latency difference that could enable timing channels.
unXpec: Breaking Undo-based Safe Speculation Speculative execution attacks, which exploit speculative execution to leak secrets, have aroused significant concerns in both industry and academia. They mainly exploit covert or side channels over microarchitectural states left by mis-speculated and squashed instructions (i.e., transient instructions). Most such attacks target cache states. Existing cache-based defenses against speculative execution attacks fall into two categories, Invisible and Undo. Most Invisible defenses buffer execution metadata of speculative instructions and place them into the cache only if the speculatively executed instructions become determined. Motivated by the fact that mis-speculations are rare cases, Undo defenses allow speculative instructions to modify cache states. Upon a mis-speculation, they roll back cache states to the ones prior to the execution of transient instructions. However, Invisible defenses have been recently found insecure by the speculative interference attack. This calls for a deep security inspection of Undo defenses against speculative execution attacks. In this paper, we present unXpec as the first attack against Undo-based safe speculation. It exploits the secret-dependent timing channel exhibited through the rollback operations of Undo defenses. Specifically, the rollback process requires both invalidating cache lines brought into the cache by transient instructions and restoring cache lines evicted by transiently loaded data. This opens up a channel that encodes the secret via the timing difference between when rollback involves much invalidation and restoration or not. We further leverage eviction sets to enforce more restoration operations. This yields a longer rollback time and thus a larger secret-dependent timing difference. We demonstrate the timing channel over the open-source CleanupSpec, a representative Undo solution. A single transient load can trigger a secret-dependent timing difference of 22 cycles (without eviction sets) or 32 cycles (with eviction sets), which is sufficiently exploitable for constructing a covert channel for speculative execution attacks. We run unXpec on the gem5 simulator with CleanupSpec enabled. The results show that unXpec can leak secrets at a high rate of 140 Kbps with an accuracy over 90%. Simply enforcing constant-time rollback to mitigate unXpec may induce an over 70% performance overhead.
Survey of Transient Execution Attacks and Their Mitigations Transient execution attacks, also known as speculative execution attacks, have drawn much interest in the last few years as they can cause critical data leakage. Since the first disclosure of Spectre and Meltdown attacks in January 2018, a number of new transient execution attack types have been demonstrated targeting different processors. A transient execution attack consists of two main components: transient execution itself and a covert channel that is used to actually exfiltrate the information. Transient execution is a result of the fundamental features of modern processors that are designed to boost performance and efficiency, while covert channels are unintended information leakage channels that result from temporal and spatial sharing of the micro-architectural components. Given the severity of the transient execution attacks, they have motivated computer architects in both industry and academia to rethink the design of the processors and to propose hardware defenses. To help understand the transient execution attacks, this survey summarizes the phases of the attacks and the security boundaries across which the information is leaked in different attacks. This survey further analyzes the causes of transient execution as well as the different types of covert channels and presents a taxonomy of the attacks based on the causes and types. This survey in addition presents metrics for comparing different aspects of the transient execution attacks and uses them to evaluate the feasibility of the different attacks. This survey especially considers both existing attacks and potential new attacks suggested by our analysis. This survey finishes by discussing different mitigations that have so far been proposed at the micro-architecture level and discusses their benefits and limitations.
Identifying and Filtering Near-Duplicate Documents The mathematical concept of document resemblance captures well the informal notion of syntactic similarity. The resemblance can be estimated using a fixed size "sketch" for each document. For a large collection of documents (say hundreds of millions) the size of this sketch is of the order of a few hundred bytes per document. However, for efficient large scale web indexing it is not necessary to determine the actual resemblance value: it suffices to determine whether newly encountered documents are duplicates or near-duplicates of documents already indexed. In other words, it suffices to determine whether the resemblance is above a certain threshold. In this talk we show how this determination can be made using a "sample" of less than 50 bytes per document. The basic approach for computing resemblance has two aspects: first, resemblance is expressed as a set (of strings) intersection problem, and second, the relative size of intersections is evaluated by a process of random sampling that can be done independently for each document. The process of estimating the relative size of intersection of sets and the threshold test discussed above can be applied to arbitrary sets, and thus might be of independent interest. The algorithm for filtering near-duplicate documents discussed here has been successfully implemented and has been used for the last three years in the context of the AltaVista search engine.
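The sampling idea can be made concrete with min-wise hashing: for a random hash function, the probability that two shingle sets attain the same minimum equals their resemblance, so k independent minima yield an estimator. Below is a minimal sketch, using salted MD5 as a stand-in for the engineered hash families of the actual system.

```python
import hashlib

def shingles(text, w=4):
    """All contiguous w-word windows of the document."""
    toks = text.split()
    return {" ".join(toks[i:i + w]) for i in range(len(toks) - w + 1)}

def sketch(text, k=64):
    """k min-wise samples: one minimum per independent (salted) hash."""
    sh = shingles(text)
    def h(s, salt):
        d = hashlib.md5(f"{salt}:{s}".encode()).digest()
        return int.from_bytes(d[:8], "big")
    return [min(h(s, salt) for s in sh) for salt in range(k)]

def resemblance(sk1, sk2):
    """Fraction of matching minima estimates the Jaccard resemblance."""
    return sum(a == b for a, b in zip(sk1, sk2)) / len(sk1)

d1 = "the quick brown fox jumps over the lazy dog near the river bank today"
d2 = "the quick brown fox jumps over the lazy cat near the river bank today"
print(resemblance(sketch(d1), sketch(d2)))  # estimates 7/15, about 0.47
```

A threshold test on this estimate (say, resemblance above 0.9 means near-duplicate) is then cheap enough to run against every newly crawled page.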
Threaded code The concept of “threaded code” is presented as an alternative to machine language code. Hardware and software realizations of it are given. In software it is realized as interpretive code not needing an interpreter. Extensions and optimizations are mentioned.
Scalable video coding and transport over broadband wireless networks With the emergence of broadband wireless networks and increasing demand for multimedia information on the Internet, wireless multimedia services are foreseen to become widely deployed in the next decade. Real-time video transmission typically has requirements on quality of service (QoS). However, wireless channels are unreliable and the channel bandwidth varies with time, which may cause severe deg...
Noise in current-commutating passive FET mixers Noise in the mixer of zero-IF receivers can compromise the overall receiver sensitivity. The evolution of a passive CMOS mixer based on the knowledge of the physical mechanisms of noise in an active mixer is explained. Qualitative physical models that simply explain the frequency translation of both the flicker and white noise of different FETs in the mixer have been developed. Derived equations have been verified by simulations, and mixer optimization has been explained.
A 40 Gb/s CMOS Serial-Link Receiver With Adaptive Equalization and Clock/Data Recovery This paper presents a 40 Gb/s serial-link receiver including an adaptive equalizer and a CDR circuit. A parallel-path equalizing filter is used to compensate the high-frequency loss in copper cables. The adaptation is performed by only varying the gain in the high-pass path, which allows a single loop for proper control and completely removes the RC filters used for separately extracting the high-...
Armature Reaction Field and Inductance of Coreless Moving-Coil Tubular Linear Machine Analysis of armature reaction field and inductance is extremely important for design and control implementation of electromagnetic machines. So far, most studies have focused on the magnetic field generated by permanent-magnet (PM) poles, whereas less work has been done on the armature reaction field. This paper proposes a novel analytical modeling method to predict the armature reaction field of a coreless PM tubular linear machine with dual Halbach array. Unlike conventional modeling approaches, the proposed method formulates the armature reaction field for electromagnetic machines with finite length, so that the analytical modeling precision can be improved. In addition, winding inductance is also analytically formulated to facilitate dynamic motion control based on the reaction field solutions. Numerical results are subsequently obtained with the finite-element method and employed to validate the derived analytical models. A research prototype with dual Halbach array and single phase input is developed. Experiments are conducted on the reaction field and inductance to further verify the obtained mathematical models.
Walsh-Hadamard-Based Orthogonal Sampling Technique for Parallel Neural Recording Systems Walsh-Hadamard based orthogonal sampling of signals is studied in this paper, and an application of this technique is presented. Using orthogonal sampling, a single analog-to-digital converter (ADC) is sufficient to perform parallel (simultaneous) recording from the sensors. Furthermore, with Walsh functions employed as modulation signals, the required bandwidth of the ADC in the proposed system is equal to the bandwidth of a time-multiplexed ADC in a system with an identical number of recording channels. Therefore, the bandwidth of the ADC in the proposed system is effectively employed and shared among all the channels. The efficient usage of the ADC bandwidth leads to saving power at the ADC stage and reducing the data rate of the output signal compared to state-of-the-art recording systems based on frequency-division multiplexing. The proposed orthogonal sampling technique for multi-channel neural recording is implemented with four recording channels in a 0.18 μm technology, resulting in a power consumption of 1.26 μW/channel at a 0.8 V supply.
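The orthogonal-sampling arithmetic can be sketched in a few lines: modulate each channel by its row of a Hadamard matrix, sum everything onto the single ADC input, and demodulate by correlating against the same rows. A minimal NumPy sketch with four ideal, noiseless channels follows.

```python
import numpy as np

# 4x4 Walsh-Hadamard matrix: rows are mutually orthogonal +/-1 sequences.
H2 = np.array([[1, 1], [1, -1]])
H4 = np.kron(H2, H2)

def modulate(samples):
    """samples: one value per channel. The shared ADC wire carries 4
    chips per sample period: chip t = sum_i H4[t, i] * samples[i]."""
    return H4 @ samples

def demodulate(chips):
    """Correlate with each Walsh row; orthogonality (H4.T @ H4 = 4*I)
    separates the channels again."""
    return (H4.T @ chips) / 4.0

x = np.array([1.0, -0.5, 0.25, 2.0])  # simultaneous channel samples
print(demodulate(modulate(x)))         # recovers x exactly
```

The point of the construction is visible in the shapes: four channels share one chip stream whose rate equals that of a four-way time-multiplexed ADC, so the converter bandwidth is fully used with no idle slots.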
1.040462
0.048
0.040889
0.033333
0.033333
0.033333
0.024
0.0074
0
0
0
0
0
0
A Monte Carlo Simulation Approach to Evaluate Service Capacities of EV Charging and Battery Swapping Stations. With the rapid growth of electric vehicle (EV) ownership, attention has been paid to the foundation of EVs: the electric vehicle supply equipment (EVSE). Different approaches, among which battery swapping and fast charging are the two best studied, have been pursued to solve the tradeoff problem between battery charging speed and battery lifetime. There has been considerable deba...
Dynamic Pricing for Electric Vehicle Extreme Fast Charging Significant developments and advances pertaining to electric vehicle (EV) technologies, such as extreme fast charging (XFC), have been witnessed in the last decade. However, there are still many challenges to the wider deployment of EVs. One of the major barriers is the limited availability of fast charging stations. A possible solution is to build a fast charging sharing system, by encouraging small business owners or even householders to install and share their fast charging devices, by reselling electric energy sourced from traditional utility companies or their own solar grid. To incentivize such a system, a smart dynamic pricing scheme is needed to facilitate those growing markets with fast charging stations. The pricing scheme is expected to take into account the dynamics intertwined with pricing, demand, and environmental factors, in an effort to maximize the long-term profit with the optimal price. To this end, this paper formulates the problem of dynamic pricing for fast charging as a Markov decision process and accordingly proposes several algorithmic schemes for different applications. An experimental study is conducted, yielding useful and interesting insights.
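To make the MDP formulation concrete, a toy tabular Q-learning loop is sketched below; the hour-of-day state, the price-sensitive demand model, and every constant are invented for illustration and are not the paper's model or algorithms.

```python
import random

PRICES = [0.2, 0.4, 0.6]  # hypothetical $/kWh actions
HOURS = 24

def demand(hour, price, rng):
    """Toy demand: a daytime peak that shrinks as the price rises."""
    base = 8 if 8 <= hour < 20 else 3
    return max(0.0, rng.gauss(base * (1.0 - price), 1.0))

def train(episodes=3000, alpha=0.1, gamma=0.95, eps=0.1, seed=0):
    rng = random.Random(seed)
    Q = [[0.0] * len(PRICES) for _ in range(HOURS)]
    for _ in range(episodes):
        for hour in range(HOURS):
            a = (rng.randrange(len(PRICES)) if rng.random() < eps
                 else max(range(len(PRICES)), key=lambda i: Q[hour][i]))
            reward = PRICES[a] * demand(hour, PRICES[a], rng)  # revenue
            nxt = (hour + 1) % HOURS
            Q[hour][a] += alpha * (reward + gamma * max(Q[nxt]) - Q[hour][a])
    return Q

Q = train()
policy = [PRICES[max(range(len(PRICES)), key=lambda i: Q[h][i])]
          for h in range(HOURS)]
print(policy)  # one learned price per hour of the day
```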
Enabling Extreme Fast Charging Technology for Electric Vehicles As a significant part of the next-generation smart grid, electric vehicles (EVs) are essential for most countries to achieve energy independence, secure energy supply, and alleviate the pressure on environmental protection and energy security. Although EVs have grown rapidly, slow recharging remains the biggest obstacle to wider adoption. A gasoline vehicle can pump enough gasoline in less than ten minutes to carry itself a few hundred miles, whereas most of today's fast-charging techniques take half an hour only to provide a very limited electric driving range.
Blockchain-Based Electric Vehicle Incentive System for Renewable Energy Consumption The rising proportion of renewable energy (RE) penetration with high variability introduces immense pressure on the stability of power grids. At the same time, a rapid increase in electric vehicle (EV) penetration level leads to uncoordinated charging loads, which poses significant challenges to operators. By properly guiding and scheduling the charging behaviors, EV may no longer be a burden, but a valuable asset to mitigate the RE integration problem. In this brief, we first propose a prioritization ranking algorithm for EV drivers based on their driving and charging behaviors, and then we propose a blockchain-based EV incentive system to maximize the utilization of RE. The proposed system is secure, anonymous, and decentralized. By incorporating the utilities, EV drivers, EV charging service providers, and RE providers into the proposed incentive system, this brief provides a plan to guide EV users to charge at the desired time frames with higher RE generation. The market mechanism of the incentive system is discussed. The effectiveness of the system is verified by simulation.
Electric Vehicles with a Battery Switching Station: Adoption and Environmental Impact The transportation sector's carbon footprint and dependence on oil are of deep concern to policy makers in many countries. Use of all-electric drive trains is arguably the most realistic medium-term solution to address these concerns. However, motorist anxiety induced by an electric vehicle's limited range and high battery cost have constrained consumer adoption. A novel switching-station-based solution is touted as a promising remedy. Vehicles use standardized batteries that, when depleted, can be switched for fully charged batteries at switching stations, and motorists only pay for battery use. We build a model that highlights the key mechanisms driving adoption and use of electric vehicles in this new switching-station-based electric vehicle system and contrast it with conventional electric vehicles. Our model employs results from repairable item inventory theory to capture switching-station operation; we embed this model in a behavioral model of motorist use and adoption. Switching-station systems effectively transfer range risk from motorists to the station operator, who, through statistical economies of scale, can better manage it. We find that this transfer of risk can lead to higher electric vehicle adoption than in a conventional system, but it also encourages more driving than a conventional system does. We calibrate our models with motorist behavior data, electric vehicle technology data, operation costs, and emissions data to estimate the relative effectiveness of the two systems under the status quo and other plausible future scenarios. We find that the system that is more effective at reducing emissions is often less effective at reducing oil dependence, and the misalignment between the two objectives is most severe when the energy mix is coal heavy and has advanced battery technology. Increases in gasoline prices (by imposition of taxes, for instance) are much more effective in reducing carbon emissions, whereas battery-price-reducing policy interventions are more effective for reducing oil dependence. Taken together, our results help a policy maker identify the superior system for achieving the desired objectives. They also highlight that policy makers should not conflate the dual objectives of oil dependence and emissions reductions, as the preferred system, and the policy interventions that further that system, may be different for the two objectives. This paper was accepted by Yossi Aviv, operations management.
Optimal battery purchasing and charging strategy at electric vehicle battery swap stations. •We formulate a battery purchasing and charging problem for battery swap.•We use a dynamic model to capture the time-varying energy price and demand.•A fluid approach is used to address the curse of dimensionality of the model.•Robust optimization is applied to examine the impact of demand uncertainty.•We investigate the impact of energy price and demand patterns on system cost.
Max-Min D-Cluster Formation in Wireless Ad Hoc Networks An ad hoc network may be logically represented as a set of clusters. The clusterheads form a d-hop dominating set: each node is at most d hops from a clusterhead. Clusterheads form a virtual backbone and may be used to route packets for nodes in their cluster. Previous heuristics restricted themselves to 1-hop clusters. We show that the minimum d-hop dominating set problem is NP-complete. Then we present a heuristic to form d-clusters in a wireless ad hoc network. Nodes are assumed to have a non-deterministic mobility pattern. Clusters are formed by diffusing node identities along the wireless links. When the heuristic terminates, a node either becomes a clusterhead or is at most d wireless hops away from its clusterhead. The value of d is a parameter of the heuristic. The heuristic can be run either at regular intervals or whenever the network configuration changes. One of its features is that it tends to re-elect existing clusterheads even when the network configuration changes, which helps to reduce the communication overhead during the transition from old clusterheads to new clusterheads. There is also a tendency to evenly distribute the mobile nodes among the clusterheads, and to evenly distribute the responsibility of acting as clusterhead among all nodes. Thus, the heuristic is fair and stable. Simulation experiments demonstrate that the proposed heuristic is better than two earlier heuristics, namely the LCA [1] and Degree-based [11] solutions.
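For readers who want the mechanics of this class of heuristics, here is a minimal Python sketch of the floodmax/floodmin diffusion at the core of max-min d-clustering. It deliberately omits the tie-breaking rules of the full heuristic, and the adjacency map `adj` is illustrative.

```python
# A simplified sketch of max-min d-hop cluster formation: d rounds of
# floodmax (adopt the largest id heard) followed by d rounds of floodmin
# (adopt the smallest), omitting the full heuristic's tie-breaking rules.

def max_min_d_cluster(adj, d):
    winner = {v: v for v in adj}  # each node starts as its own winner
    for _ in range(d):            # floodmax: large ids claim territory
        winner = {v: max([winner[v]] + [winner[u] for u in adj[v]])
                  for v in adj}
    for _ in range(d):            # floodmin: smaller ids reclaim ground,
        winner = {v: min([winner[v]] + [winner[u] for u in adj[v]])
                  for v in adj}   # evening out cluster sizes
    heads = {v for v in adj if winner[v] == v}
    return heads, winner

# A 5-node path graph; every node ends at most d hops from a head.
adj = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 5}, 5: {4}}
print(max_min_d_cluster(adj, d=2))
```

Because identities diffuse one hop per round, 2d rounds suffice for every node to end up within d wireless hops of the winner it converges to.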
Measurement issues in galvanic intrabody communication: influence of experimental setup Significance: The need for increasingly energy-efficient and miniaturized bio-devices for ubiquitous health monitoring has paved the way for considerable advances in the investigation of techniques such as intrabody communication (IBC), which uses human tissues as a transmission medium. However, IBC still poses technical challenges regarding the measurement of the actual gain through the human body. The heterogeneity of the experimental setups and conditions used, together with the inherent uncertainty caused by the human body, makes the measurement process even more difficult. Goal: The objective of this work, focused on galvanic coupling IBC, is to study the influence of different measurement equipment and conditions on the IBC channel. Methods: Different experimental setups have been proposed in order to analyze key issues such as grounding, load resistance, type of measurement device, and the effect of cables. In order to avoid the uncertainty caused by the human body, an IBC electric circuit phantom mimicking both human bioimpedance and gain has been designed. Given the low-frequency operation of galvanic coupling, a frequency range between 10 kHz and 1 MHz has been selected. Results: The correspondence between simulated and experimental results obtained with the electric phantom has allowed us to discriminate the effects caused by the measurement equipment. Conclusion: This study has helped us obtain useful considerations about optimal setups for galvanic-type IBC, as well as to identify some of the main causes of discrepancy in the literature.
Energy-Efficient Communication Protocol for Wireless Microsensor Networks Wireless distributed micro-sensor systems will enable the reliable monitoring of a variety of environments for both civil and military applications. In this paper, we look at communication protocols, which can have a significant impact on the overall energy dissipation of these networks. Based on our findings that the conventional protocols of direct transmission, minimum-transmission-energy, multihop routing, and static clustering may not be optimal for sensor networks, we propose LEACH (Low-Energy Adaptive Clustering Hierarchy), a clustering-based protocol that utilizes randomized rotation of local cluster base stations (cluster-heads) to evenly distribute the energy load among the sensors in the network. LEACH uses localized coordination to enable scalability and robustness for dynamic networks, and incorporates data fusion into the routing protocol to reduce the amount of information that must be transmitted to the base station. Simulations show that LEACH can achieve as much as a factor of 8 reduction in energy dissipation compared with conventional routing protocols. In addition, LEACH is able to distribute energy dissipation evenly throughout the sensors, doubling the useful system lifetime for the networks we simulated.
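The randomized rotation at the heart of LEACH reduces to a simple threshold test each round. Below is a minimal sketch of the published election rule; parameter names are illustrative.

```python
import random

# LEACH cluster-head election: node n elects itself in round r with
# probability T(n) = P / (1 - P * (r mod 1/P)) if it has not yet served
# as head in the current epoch of 1/P rounds, and 0 otherwise.

def leach_threshold(P, r, eligible):
    if not eligible:              # already served as head this epoch
        return 0.0
    return P / (1.0 - P * (r % int(round(1.0 / P))))

def elects_itself(P, r, eligible):
    return random.random() < leach_threshold(P, r, eligible)

# With P = 0.1 the threshold climbs from 0.1 to 1.0 over the epoch, so
# every node serves exactly once per 10 rounds in expectation.
print([round(leach_threshold(0.1, r, True), 3) for r in range(10)])
```

The rising threshold is what spreads the energy-hungry cluster-head role evenly across the network.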
Type-2 Fuzzy Sets and Systems: An Overview [corrected reprint] As originally published in the February 2007 issue of IEEE Computational Intelligence Magazine, the above titled paper (ibid., vol. 2, no. 1, pp. 20-29, Feb 07) contained errors in mathematics that were introduced by the publisher. The corrected version is reprinted in its entirety.
Codejail: Application-Transparent Isolation of Libraries with Tight Program Interactions.
Distributed Primal-Dual Subgradient Method for Multiagent Optimization via Consensus Algorithms. This paper studies the problem of optimizing the sum of multiple agents' local convex objective functions, subject to global convex inequality constraints and a convex state constraint set over a network. Through characterizing the primal and dual optimal solutions as the saddle points of the Lagrangian function associated with the problem, we propose a distributed algorithm, named the distributed primal-dual subgradient method, to provide approximate saddle points of the Lagrangian function, based on the distributed average consensus algorithms. Under Slater's condition, we obtain bounds on the convergence properties of the proposed method for a constant step size. Simulation examples are provided to demonstrate the effectiveness of the proposed method.
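To make the iteration concrete, here is a minimal numerical sketch, with illustrative problem data, of one way a consensus-based primal-dual subgradient step can look: three agents minimize a sum of local quadratics subject to a shared constraint x <= 1. The mixing matrix, step size, and projection bounds are assumptions, not the paper's exact settings.

```python
import numpy as np

# Each agent mixes its primal/dual estimates with its neighbours'
# (consensus averaging), then takes a subgradient step on its local
# Lagrangian f_i(x) + lam * g(x), with f_i(x) = (x - c_i)^2, g(x) = x - 1.

c = np.array([0.5, 2.0, 3.0])        # local objective centres
W = np.array([[0.5, 0.5, 0.0],       # doubly stochastic mixing matrix
              [0.5, 0.0, 0.5],
              [0.0, 0.5, 0.5]])
x, lam, alpha = np.zeros(3), np.zeros(3), 0.01

for _ in range(5000):
    x_mix, lam_mix = W @ x, W @ lam                  # consensus step
    grad = 2 * (x_mix - c) + lam_mix                 # local subgradient
    x = np.clip(x_mix - alpha * grad, -5.0, 5.0)     # projected primal step
    lam = np.clip(lam_mix + alpha * (x_mix - 1.0), 0.0, 10.0)  # dual ascent

print(x)  # all agents approach the constrained optimum x* = 1
```

With a constant step size the iterates only reach a neighborhood of the saddle point, mirroring the approximate-convergence bounds described in the abstract.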
Implementation of LTE SC-FDMA on the USRP2 software defined radio platform In this paper we discuss the implementation of a Single Carrier Frequency Division Multiple Access (SC-FDMA) transceiver running over the Universal Software Radio Peripheral 2 (USRP2). SC-FDMA is the air interface which has been selected for the uplink in the latest Long Term Evolution (LTE) standard. In this paper we derive an AWGN channel model for SC-FDMA transmission, which is useful for benchmarking experimental results. In our implementation, we deal with signal scaling, equalization and partial synchronization to realize SC-FDMA transmission over a noisy channel at rates up to 5.184 Mbit/s. Experimental results on the Bit Error Rate (BER) versus Signal-to-Noise Ratio (SNR) are presented and compared to theoretical and simulated performance.
Power Efficiency Comparison of Event-Driven and Fixed-Rate Signal Conversion and Compression for Biomedical Applications Energy-constrained biomedical recording systems need power-efficient data converters and good signal compression in order to meet the stringent power consumption requirements of many applications. In the literature today, a SAR ADC in combination with digital compression is typically used. Recently, alternative event-driven sampling techniques have been proposed that incorporate compression in the ADC, such as level-crossing A/D conversion. This paper describes the power efficiency analysis of such a level-crossing ADC (LCADC) and of the traditional fixed-rate SAR ADC with simple compression. A model for the power consumption of the LCADC is derived, which is then compared to the power consumption of the SAR ADC with zero-order hold (ZOH) compression for multiple biosignals (ECG, EMG, EEG, and EAP). The LCADC is more power efficient than the SAR ADC up to a cross-over point in quantizer resolution (for example, 8 bits for an EEG signal). This cross-over point decreases with the ratio of the maximum to average slope in the signal of the application. It also changes with the technology and design techniques used. The LCADC is thus suited for low- to medium-resolution applications. In addition, the event-driven operation of an LCADC results in fewer data to be transmitted in a system application. The event-driven LCADC without timer and with a single-bit quantizer achieves a reduction in power consumption at the system level of two orders of magnitude, an order of magnitude better than the SAR ADC with ZOH compression. At the system level, the LCADC thus offers a big advantage over the SAR ADC.
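The data-compression effect of level-crossing conversion is easy to demonstrate. The sketch below, using an illustrative burst-like signal, counts LC events against fixed-rate samples; `delta` plays the role of one LSB.

```python
import math

# Level-crossing sampling: emit an event only when the input moves one
# quantization step (delta) away from the last captured level.

def level_crossing(samples, delta):
    events, last = [], samples[0]
    for n, v in enumerate(samples):
        while abs(v - last) >= delta:      # handle multi-level jumps
            last += delta if v > last else -delta
            events.append((n, last))
    return events

# A mostly flat record with one burst of activity, loosely like an EAP.
sig = ([0.0] * 400
       + [math.sin(2 * math.pi * k / 40) for k in range(200)]
       + [0.0] * 400)
ev = level_crossing(sig, delta=0.125)
print(f"{len(ev)} LC events vs {len(sig)} fixed-rate samples")
```

The sparser the signal, the larger the gap between the two counts, which is exactly the regime where the abstract reports the LCADC's two-orders-of-magnitude system-level saving.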
1.071111
0.066667
0.066667
0.066667
0.033333
0.022222
0
0
0
0
0
0
0
0
A Millimeter-Wave CMOS VCO Featuring a Mode-Ambiguity-Aware Multi-Resonant-RLCM Tank This paper presents a millimeter-wave NMOS-PMOS-complementary (CMOS) VCO with a multi-resonant Resistor-Inductor-Capacitor-Mutual Inductance (RLCM) tank. It features an 8-port multi-tap inductor with the switched-capacitor arrays to generate and align the 1st, 2nd…
A 3.3-mW 25.2-to-29.4-GHz Current-Reuse VCO Using a Single-Turn Multi-Tap Inductor and Differential-Only Switched-Capacitor Arrays With a 187.6-dBc/Hz FOM A millimeter-wave current-reuse voltage-controlled oscillator (VCO) features a single-turn multi-tap inductor and two separate differential-only switched-capacitor arrays to improve the power efficiency and phase noise (PN). Specifically, a single-branch complementary VCO topology, in conjunction with a multi-resonant Resistor-Inductor-Capacitor-Mutual inductance (RLCM) tank, allows sharing the bias current and reshaping the impulse-sensitivity-function. The latter is based on an area-efficient RLCM tank to concurrently generate two high quality-factor differential-mode resonances at the fundamental and 2nd-harmonic oscillation frequencies. Fabricated in 65-nm CMOS technology, our VCO at 27.7 GHz shows a PN of -109.91 dBc/Hz at 1-MHz offset (after on-chip divider-by-2), while consuming just 3.3 mW at a 1.1-V supply. It corresponds to a Figure-of-Merit (FOM) of 187.6 dBc/Hz. The frequency tuning range is 15.3% (25.2 to 29.4 GHz) and the core area is 0.116 mm².
Bird'S-Eye View Of Analog And Mixed-Signal Chips For The 21st Century The Internet of Everything (IoE), clearly a 21st century's technology, brilliantly plays with digital data obtained from analog sources, bringing together two different realities, the analog (physical/real), and the digital (cyber/virtual) worlds. Then, with the boundaries of IoE still analog in nature, the required functions at the interface involve sensing, measuring, filtering, converting, processing, and connecting, which imply that the analog layer governs the entire system in terms of accuracy and precision. Furthermore, such interface integrates several analog and mixed-signal subsystems that comprise mainly signal transmission and reception, frequency generation, energy harvesting, data, and power conversion. This paper sets forth a state-of-the-art design perspective of some of the most critical building blocks used in the analog/digital interface, covering wireless cellular transceivers, millimeter-wave frequency generators, energy harvesting interfaces, plus, data and power converters, that exhibit high quality performance achieved through low-power consumption, high energy-efficiency, and high speed.
0.6–2.7-Gb/s Referenceless Parallel CDR With a Stochastic Dispersion-Tolerant Frequency Acquisition Technique A 0.6-2.7-Gb/s phase-rotator-based four-channel digital clock and data recovery (CDR) IC featuring a low-power dispersion-tolerant referenceless frequency acquisition technique is presented. A quasi-periodic reference clock signal extracted directly from a dispersed input signal is distributed to digitally controlled phase rotators in the CDR ICs for phase acquisition. A multiphase frequency acquisition scheme is employed for the reduction of the clock jitter. The measurement results show that the proposed design offers a lower frequency offset and clock noise floor under channel dispersion, as compared with conventional designs. The proposed four-channel digital CDR IC is fabricated in a 90-nm CMOS process. The figure of merit for a single channel is 8 mW/Gb/s, including a feedforward equalizer, a decision-feedback equalizer, and a referenceless CDR.
On-Chip Jitter Measurement Using Jitter Injection in a 28 Gb/s PI-Based CDR. We present a technique to measure random jitter in a phase interpolator (PI)-based clock and data recovery (CDR) circuit by injecting a controlled amount of square-wave jitter into its edge clock and monitoring its effect on the autocorrelation function of the CDR's bang-bang phase detector output. Jitter is injected by adjusting the code of the edge PI while the autocorrelation function is measur...
Implicit Common-Mode Resonance in LC Oscillators. The performance of a differential LC oscillator can be enhanced by resonating the common mode of the circuit at twice the oscillation frequency. When this technique is correctly employed, Q-degradation due to the triode operation of the differential pair is eliminated and flicker noise is nulled. Until recently, one or more tail inductors have been used to achieve this common-mode resonance. In th...
A 5-Gb/s ADC-Based Feed-Forward CDR in 65 nm CMOS This paper presents an ADC-based CDR that blindly samples the received signal at twice the data rate and uses these samples to directly estimate the locations of zero crossings for the purpose of clock and data recovery. We successfully confirmed the operation of the proposed CDR architecture at 5 Gb/s. The receiver is implemented in 65 nm CMOS, occupies 0.51 mm(2) and consumes 178.4 mW at 5 Gb/s.
An Oversampling SAR ADC With DAC Mismatch Error Shaping Achieving 105 dB SFDR and 101 dB SNDR Over 1 kHz BW in 55 nm CMOS. The successive-approximation-register (SAR) architecture is well known for its high power efficiency in medium-resolution analog-to-digital converters (ADCs). However, when considered for high-precision applications, SAR ADCs suffer from non-linearity resulting from capacitor mismatch and limited dynamic range due to comparator noise. This work presents a mismatch error shaping (MES) technique for...
Planning as heuristic search In the AIPS98 Planning Contest, the hsp planner showed that heuristic search planners can be competitive with state-of-the-art Graphplan and sat planners. Heuristic search planners like hsp transform planning problems into problems of heuristic search by automatically extracting heuristics from Strips encodings. They differ from specialized problem solvers such as those developed for the 24-Puzzle and Rubik's Cube in that they use a general declarative language for stating problems and a general mechanism for extracting heuristics from these representations. In this paper, we study a family of heuristic search planners that are based on a simple and general heuristic that assumes that action preconditions are independent. The heuristic is then used in the context of best-first and hill-climbing search algorithms, and is tested over a large collection of domains. We then consider variations and extensions such as reversing the direction of the search for speeding node evaluation, and extracting information about propositional invariants for avoiding dead-ends. We analyze the resulting planners, evaluate their performance, and explain when they do best. We also compare the performance of these planners with two state-of-the-art planners, and show that the simplest planner based on a pure best-first search yields the most solid performance over a large set of problems. We also discuss the strengths and limitations of this approach, establish a correspondence between heuristic search planning and Graphplan, and briefly survey recent ideas that can reduce the current gap in performance between general heuristic search planners and specialized solvers.
The Transitive Reduction of a Directed Graph
Mdvm System Concept, Paging Latency And Round-2 Randomized Leader Election Algorithm In Sg The future trend in the computing paradigm is marked by mobile computing based on a mobile-client/server architecture connected by a wireless communication network. However, mobile computing systems have limitations because of the resource-thin mobile clients operating on battery power. The MDVM system allows the mobile clients to utilize the memory and CPU resources of Server-Groups (SG) to overcome the resource limitations of clients in order to support high-end mobile applications such as m-commerce and virtual organization (VO). In this paper the concept of the MDVM system and the architecture of a cellular network containing the SG are discussed. A round-2 randomized distributed algorithm is proposed to elect a unique leader and co-leader of the SG. The algorithm is free from any assumption about network topology or buffer space limitations and is based on dynamically elected coordinators, eliminating a single point of failure. The algorithm is implemented in a distributed system setup and the network-paging latency values of wired and wireless networks are measured experimentally. The experimental results demonstrate that in most cases the algorithm successfully terminates in the first round, and the possibility of second-round execution decreases significantly with the increase in the size of the SG (|Na|). The overall message complexity of the algorithm is O(|Na|). The comparative study of network-paging latencies indicates that 3G/4G mobile communication systems would support the realization of the MDVM system.
A 13-b 40-MSamples/s CMOS pipelined folding ADC with background offset trimming Two key concepts of pipelining and background offset trimming are applied to demonstrate a 13-b 40-MSamples/s CMOS analog-to-digital converter (ADC) based on the basic folding and interpolation architecture. Folding amplifier stages made of simple differential pairs are pipelined using distributed interstage track-and-holders. Background offset trimming implemented with a highly oversampling delta-sigma modulator enhances the resolution of the CMOS folders beyond 12 bits. The background offset trimming circuit continuously measures and adjusts the offsets of the folding amplifiers without interfering with the normal operation. The prototype system is further refined using subranging and digital correction, and exhibits a spurious-free dynamic range (SFDR) of 82 dB at 40 MSamples/s. The measured differential nonlinearity (DNL) and integral nonlinearity (INL) are about ±0.5 and ±2.0 LSB, respectively. The chip fabricated in 0.5-µm CMOS occupies 8.7 mm² and consumes 800 mW at 5 V.
Electromagnetic regenerative suspension system for ground vehicles This paper considers an electromagnetic regenerative suspension system (ERSS) that recovers the kinetic energy originating from vehicle vibration, which is otherwise dissipated in traditional shock absorbers. It can also be used as a controllable damper that can improve the vehicle's ride and handling performance. The proposed electromagnetic regenerative shock absorbers (ERSAs) utilize a linear or a rotational electromagnetic generator to convert the kinetic energy from suspension vibration into electricity, which can be used to reduce the load on the alternator so as to improve fuel efficiency. A complete ERSS is discussed here that includes the regenerative shock absorber, the power electronics for power regulation and suspension control, and an electronic control unit (ECU). Different shock absorber designs are proposed and compared for simplicity, efficiency, energy density, and controlled suspension performance. Both simulation and experimental results are presented and discussed.
Walsh-Hadamard-Based Orthogonal Sampling Technique for Parallel Neural Recording Systems Walsh-Hadamard-based orthogonal sampling of signals is studied in this paper, and an application of this technique is presented. Using orthogonal sampling, a single analog-to-digital converter (ADC) is sufficient to perform parallel (simultaneous) recording from the sensors. Furthermore, employing Walsh functions as modulation signals, the required bandwidth of the ADC in the proposed system is equal to the bandwidth of a time-multiplexed ADC in a system with an identical number of recording channels. Therefore, the bandwidth of the ADC in the proposed system is effectively employed and shared among all the channels. The efficient usage of the ADC bandwidth leads to saving power at the ADC stage and reducing the data rate of the output signal compared to state-of-the-art recording systems based on frequency-division multiplexing. This paper presents the orthogonal sampling technique for neural recording in multi-channel recording systems; it is implemented with four recording channels in a 0.18 μm technology, resulting in a power consumption of 1.26 μW/channel at a 0.8 V supply.
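The orthogonal-sampling arithmetic can be shown in a few lines: channel samples are spread by mutually orthogonal Walsh codes (rows of a Hadamard matrix), summed onto a single line as one ADC would see it, and recovered by correlation. This is a mathematical sketch of the principle, not a model of the reported circuit.

```python
import numpy as np
from scipy.linalg import hadamard

N = 4                                   # recording channels
W = hadamard(N)                         # rows: orthogonal +/-1 Walsh codes
x = np.array([0.3, -1.2, 0.7, 0.05])    # one sample per channel

composite = W.T @ x                     # chip-rate sum seen by the one ADC
recovered = (W @ composite) / N         # correlate; W @ W.T = N * I
print(recovered)                        # -> [ 0.3  -1.2   0.7   0.05]
```

Orthogonality is what lets a single ADC running at N times the per-channel rate replace N converters without crosstalk.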
1.11
0.11
0.11
0.1
0.1
0.06
0.025
0.003333
0
0
0
0
0
0
State Estimation for Quaternion-Valued Neural Networks With Multiple Time Delays This paper addresses the issue of state estimation for the quaternion-valued neural networks (QVNNs) with leakage, discrete, and distributed delays by employing the Lyapunov stability theory and the quaternion matrix theory. The criteria are developed in two forms of quaternion-valued linear matrix inequalities (LMIs) and complex-valued LMIs for guaranteeing the existence and stability of state estimators of the delayed QVNNs. Two numerical examples are provided to illustrate the effectiveness of the obtained results.
Finite-time stabilization by state feedback control for a class of time-varying nonlinear systems. In this paper, finite-time stabilization is considered for a class of nonlinear systems dominated by a lower-triangular model with a time-varying gain. Based on the finite-time Lyapunov stability theorem and dynamic gain control design approach, state feedback finite-time stabilization controllers are proposed with gains being tuned online by two dynamic equations. Different from many existing finite-time control designs for lower-triangular nonlinear systems, the celebrated backstepping method is not utilized here. It is observed that our design procedure is much simpler, and the resulting control gains are in general not as high as those provided by the backstepping method. A simulation example is given to demonstrate the effectiveness of the proposed design procedure.
Robust stability of hopfield delayed neural networks via an augmented L-K functional. This paper focuses on the issue of robust stability of artificial delayed neural networks. A free-matrix-based inequality strategy is developed by introducing a set of slack variables, which can be optimized by means of existing convex optimization algorithms. To reflect a large portion of the dynamical behaviors of the system, uncertain parameters are considered. By constructing an augmented Lyapunov functional, sufficient conditions are derived to guarantee that the considered neural systems are completely stable. The conditions are presented in the form of linear matrix inequalities (LMIs). Finally, numerical cases are given to show the suitability of the presented results.
Finite-time stabilization for a class of nonlinear systems via optimal control. In general, finite-time stabilization techniques can always stabilize a system if control cost is not considered. Considering the fact that control cost is a very important factor in the control area, we investigate the finite-time stabilization problem for a class of nonlinear systems in this paper, where the control cost can also be reduced. We formulate this problem as an optimal control problem, where the control functions are optimized such that the system can be stabilized with minimum control cost. Then, the control parameterization enhancing transform and the control parameterization method are applied to solve this problem. Two numerical examples are presented to show the effectiveness of the proposed method.
A Unified Framework Design for Finite-Time and Fixed-Time Synchronization of Discontinuous Neural Networks. In this article, the problems of finite-time/fixed-time synchronization have been investigated for discontinuous neural networks in the unified framework. To achieve the finite-time/fixed-time synchronization, a novel unified integral sliding-mode manifold is introduced, and corresponding unified control strategies are provided; some criteria are established for selecting suitable parameters for s...
Existence and uniform stability analysis of fractional-order complex-valued neural networks with time delays. This paper deals with the problem of existence and uniform stability analysis of fractional-order complex-valued neural networks with constant time delays. Complex-valued recurrent neural networks are an extension of real-valued recurrent neural networks that includes complex-valued states, connection weights, or activation functions. This paper presents sufficient conditions for the existence and uniform stability of such networks. Three numerical simulations are delineated to substantiate the effectiveness of the theoretical results.
Finite-time synchronization of nonidentical BAM discontinuous fuzzy neural networks with delays and impulsive effects via non-chattering quantized control •Two new inequalities are developed to deal with the mismatched coefficients of the fuzzy part.•A simple but robust quantized state feedback controller is designed to overcome the effects of discontinuous activations, time delay, and nonidentical coefficients simultaneously. The designed control schemes do not utilize the sign function and can save channel resources. Moreover, novel non-chattering quantized adaptive controllers are also considered to reduce the control cost.•By utilizing 1-norm analytical technique and comparison system method, the effect of impulses on the FTS is well coped with.•Without utilizing the finite-time stability theorem in [16], several FTS criteria are obtained. Moreover, the settling time is explicitly estimated. Results of this paper can easily be extended to FTS of other classical delayed impulsive NNs with or without nonidentical coefficients.
Chord: a scalable peer-to-peer lookup protocol for internet applications A fundamental problem that confronts peer-to-peer applications is the efficient location of the node that stores a desired data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis and simulations show that Chord is scalable: Communication cost and the state maintained by each node scale logarithmically with the number of Chord nodes.
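Chord's single operation, mapping a key to a node, can be sketched directly. The toy below computes successor(key) over a known node list; real Chord resolves it in O(log N) messages using finger tables rather than a global view. The identifier width and names are illustrative.

```python
import hashlib
from bisect import bisect_left

M = 16  # identifier bits in this toy ring

def chord_id(name: str) -> int:
    # Hash a name onto the 2**M identifier circle (consistent hashing).
    return int(hashlib.sha1(name.encode()).hexdigest(), 16) % (2 ** M)

def successor(node_ids, key_id):
    # First node id clockwise from key_id, wrapping around the circle.
    ring = sorted(node_ids)
    i = bisect_left(ring, key_id)
    return ring[i % len(ring)]

nodes = [chord_id(f"node-{i}") for i in range(8)]
key = chord_id("my-data-item")
print(f"key {key} is stored at node {successor(nodes, key)}")
```

Storing each key at its successor is what keeps churn cheap: when a node joins or leaves, only the keys between it and its neighbor move.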
The Influence of the Sigmoid Function Parameters on the Speed of Backpropagation Learning The sigmoid function is the most commonly used function in feed-forward neural networks because of its nonlinearity and the computational simplicity of its derivative. In this paper we discuss a variant sigmoid function with three parameters that denote the dynamic range, symmetry, and slope of the function, respectively. We illustrate how these parameters influence the speed of backpropagation learning and introduce a hybrid sigmoidal network with different parameter configurations in different layers. By regulating and modifying the sigmoid function parameter configuration in different layers, the error signal problem, the oscillation problem, and the asymmetrical input problem can be reduced. To compare the learning capabilities and the learning rate of the hybrid sigmoidal networks with those of conventional networks, we have tested the two-spirals benchmark, which is known to be a very difficult task for backpropagation and its relatives.
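One plausible three-parameter form of such a sigmoid is sketched below, with `gamma` as the dynamic range, `sigma` as the slope, and `theta` as a symmetry offset; the paper's exact parameterization may differ.

```python
import math

# A three-parameter sigmoid: gamma scales the dynamic range, sigma the
# slope, and theta shifts the output to control symmetry.

def sigmoid(x, gamma=2.0, sigma=1.0, theta=1.0):
    return gamma / (1.0 + math.exp(-sigma * x)) - theta

# gamma=2, theta=1 gives an output symmetric about zero (tanh-like),
# one way to counter the asymmetrical-input problem; a larger sigma
# steepens the transition and strengthens the error signal.
print(sigmoid(0.0), sigmoid(5.0), sigmoid(-5.0))
```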
Timing Recovery in Digital Synchronous Data Receivers A new class of fast-converging timing recovery methods for synchronous digital data receivers is investigated. Starting with a worst-case timing offset, convergence with random binary data will typically occur within 10-20 symbols. The input signal is sampled at the baud rate; these samples are then processed to derive a suitable control signal to adjust the timing phase. A general method is outlined to obtain near-minimum-variance estimates of the timing offset with respect to a given steady-state sampling criterion. Although we make certain independence assumptions between successive samples and postulate ideal decisions to obtain convenient analytical results, our simulations with a decision-directed reference and baud-to-baud adjustments yield very similar results. Convergence is exponential, and for small loop gains the residual jitter is proportional and convergence time is inversely proportional to the loop gain. The proposed algorithms are simple and economic to implement. They apply to binary or multilevel PAM signals as well as to partial response signals.
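A classic member of this class of baud-rate, decision-directed schemes is the Mueller–Müller timing error detector; a minimal sketch follows, with the loop-gain trade-off from the abstract visible as `gain`.

```python
# Mueller-Muller timing error detector plus a first-order loop update.
# x_k, x_km1 are baud-rate samples; a_k, a_km1 are the corresponding
# symbol decisions (the decision-directed reference).

def mm_timing_error(x_k, x_km1, a_k, a_km1):
    # e > 0 indicates sampling late, e < 0 early.
    return a_km1 * x_k - a_k * x_km1

def update_phase(tau, e, gain=0.01):
    # Smaller gain: less residual jitter but slower convergence,
    # matching the proportionality noted in the abstract.
    return tau - gain * e
```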
Implementing aggregation and broadcast over Distributed Hash Tables Peer-to-peer (P2P) networks represent an effective way to share information, since there are no central points of failure or bottleneck. However, the flip side of the distributed nature of P2P networks is that it is not trivial to aggregate and broadcast global information efficiently. We believe that this aggregation/broadcast functionality is a fundamental service that should be layered over existing Distributed Hash Tables (DHTs), and in this work, we design a novel algorithm for this purpose. Specifically, we build an aggregation/broadcast tree in a bottom-up fashion by mapping nodes to their parents in the tree with a parent function. The particular parent-function family we propose allows the efficient construction of multiple interior-node-disjoint trees, thus preventing single points of failure in tree structures. In this way, we provide DHTs with the ability to collect and disseminate information efficiently on a global scale. Simulation results demonstrate that our algorithm is efficient and robust.
A 112 Mb/s Full Duplex Remotely-Powered Impulse-UWB RFID Transceiver for Wireless NV-Memory Applications. A dual band symmetrical UWB-RFID transceiver for high capacity wireless NV-Memory applications is reported. The circuit exhibits a figure of merit of 58 pJ/b and 48 pJ/b in Tx and Rx respectively, with a 112.5 Mb/s data rate capability. It operates in the 7.9 GHz UWB frequency band for full duplex communication and is remotely powered through a UHF CW signal. The circuit has been implemented in a ...
A Single-Inductor 0.35 µm CMOS Energy-Investing Piezoelectric Harvester Although miniaturized piezoelectric transducers usually derive more power from motion than their electrostatic and electromagnetic counterparts, they still generate little power. The reason for this is that the electromechanical coupling factor is low, which means the damping force that tiny transducers impose on vibrations (when drawing power) is hardly noticeable. The single-inductor 0.35 μm CMOS piezoelectric harvester proposed in this paper counters this deficiency by investing energy from the battery into the transducer. The idea is to strengthen the electrostatic force against which vibrations work. This way, the circuit draws more power from the transducer, up to 79 μW from a 2.7 cm piezoelectric cantilever that is driven up to 0.25 m/s². Of the 79 μW drawn at 0.25 m/s² when investing 91 nJ of battery energy, the system outputs 52 μW, which is 3.6 times more output power than the 14.5 μW that a full-wave bridge rectifier with zero-volt diodes at its maximum power point can deliver from the same source. With 630 nW lost to the controller, power-conversion efficiency peaks at 69% when the harvester outputs 46 μW of the 67 μW it draws from the transducer at 0.25 m/s² when investing 0.8 nJ of battery energy.
Neuropixels Data-Acquisition System: A Scalable Platform for Parallel Recording of 10,000+ Electrophysiological Signals. Although CMOS fabrication has enabled a quick evolution in the design of high-density neural probes and neural-recording chips, the scaling and miniaturization of the complete data-acquisition systems has happened at a slower pace. This is mainly due to the complexity and the many requirements that change depending on the specific experimental settings. In essence, the fundamental challenge of a n...
1.11
0.1
0.1
0.1
0.1
0.1
0.06
0
0
0
0
0
0
0
Application of improved firefly algorithm for programmed PWM in multilevel inverter with adjustable DC sources. •An improved firefly algorithm is applied to determine the optimum switching angles for an 11-level cascaded H-bridge multilevel inverter with non-equal DC sources.•The firefly algorithm takes the least estimation time and surpasses the other 11 metaheuristic algorithms.•The algorithm and the model are developed in MATLAB, and the validity of the simulation is confirmed by an experimental setup using an FPGA Spartan-6A DSP.•Results are compared with those obtained using particle swarm optimization and the artificial bee colony algorithm, and it is shown that the proposed method offers reduced THD with a shorter computation period.
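For orientation, one generic firefly-algorithm step is sketched below on an arbitrary objective; the paper's improved variant adds modifications not reproduced here, and the brightness function (e.g., negative THD of the candidate switching angles) is left to the caller.

```python
import math
import random

# One step of the basic firefly algorithm: each firefly (a vector of
# switching angles) moves toward every brighter one, with attractiveness
# decaying with distance, plus a small random walk.

def firefly_step(pop, scores, beta0=1.0, gamma=1.0, alpha=0.1):
    new_pop = []
    for i, xi in enumerate(pop):
        xi = list(xi)
        for j, xj in enumerate(pop):
            if scores[j] < scores[i]:          # j is brighter (lower THD)
                r2 = sum((a - b) ** 2 for a, b in zip(xi, xj))
                beta = beta0 * math.exp(-gamma * r2)
                xi = [a + beta * (b - a) + alpha * (random.random() - 0.5)
                      for a, b in zip(xi, xj)]
        new_pop.append(xi)
    return new_pop
```

In the programmed-PWM setting, `scores` would come from evaluating the THD of the inverter output for each candidate set of switching angles.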
Multiobjective evolutionary algorithms: A survey of the state of the art A multiobjective optimization problem involves several conflicting objectives and has a set of Pareto optimal solutions. By evolving a population of solutions, multiobjective evolutionary algorithms (MOEAs) are able to approximate the Pareto optimal set in a single run. MOEAs have attracted a lot of research effort during the last 20 years, and they are still one of the hottest research areas in the field of evolutionary computation. This paper surveys the development of MOEAs primarily during the last eight years. It covers algorithmic frameworks such as decomposition-based MOEAs (MOEA/Ds), memetic MOEAs, coevolutionary MOEAs, selection and offspring reproduction operators, MOEAs with specific search methods, MOEAs for multimodal problems, constraint handling and MOEAs, computationally expensive multiobjective optimization problems (MOPs), dynamic MOPs, noisy MOPs, combinatorial and discrete MOPs, benchmark problems, performance indicators, and applications. In addition, some future research issues are also presented.
Optimal Tracking Control of Motion Systems Tracking control of motion systems typically requires accurate nonlinear friction models, especially at low speeds, and integral action. However, building accurate nonlinear friction models is time consuming, friction characteristics dramatically change over time, and special care must be taken to avoid windup in a controller employing integral action. In this paper a new approach is proposed for the optimal tracking control of motion systems with significant disturbances, parameter variations, and unmodeled dynamics. The ‘desired’ control signal that will keep the nominal system on the desired trajectory is calculated based on the known system dynamics and is utilized in a performance index to design an optimal controller. However, in the presence of disturbances, parameter variations, and unmodeled dynamics, the desired control signal must be adjusted. This is accomplished by using neural network based observers to identify these quantities, and update the control signal on-line. This formulation allows for excellent motion tracking without the need for the addition of an integral state. The system stability is analyzed and Lyapunov based weight update rules are applied to the neural networks to guarantee the boundedness of the tracking error, disturbance estimation error, and neural network weight errors. Experiments are conducted on the linear axes of a mini CNC machine for the contour control of two orthogonal axes, and the results demonstrate the excellent performance of the proposed methodology.
Adaptive tracking control of leader-follower systems with unknown dynamics and partial measurements. In this paper, a decentralized adaptive tracking control is developed for a second-order leader–follower system with unknown dynamics and relative position measurements. Linearly parameterized models are used to describe the unknown dynamics of a self-active leader and all followers. A new distributed system is obtained by using the relative position and velocity measurements as the state variables. By only using the relative position measurements, a dynamic output–feedback tracking control together with decentralized adaptive laws is designed for each follower. At the same time, the stability of the tracking error system and the parameter convergence are analyzed with the help of a common Lyapunov function method. Some simulation results are presented to validate the proposed adaptive tracking control.
Plug-and-Play Decentralized Model Predictive Control for Linear Systems In this technical note, we consider a linear system structured into physically coupled subsystems and propose a decentralized control scheme capable to guarantee asymptotic stability and satisfaction of constraints on system inputs and states. The design procedure is totally decentralized, since the synthesis of a local controller uses only information on a subsystem and its neighbors, i.e. subsystems coupled to it. We show how to automatize the design of local controllers so that it can be carried out in parallel by smart actuators equipped with computational resources and capable to exchange information with neighboring subsystems. In particular, local controllers exploit tube-based Model Predictive Control (MPC) in order to guarantee robustness with respect to physical coupling among subsystems. Finally, an application of the proposed control design procedure to frequency control in power networks is presented.
Event-Based Leader-following Consensus of Multi-Agent Systems with Input Time Delay The event-based control strategy is an effective methodology for tackling the distributed control of multi-agent systems with limited on-board resources. This technical note focuses on event-based leader-following consensus for multi-agent systems described by general linear models and subject to input time delay between controller and actuator. For each agent, the controller updates are event-based and only triggered at its own event times. A necessary condition and two sufficient conditions on leader-following consensus are presented, respectively. It is shown that continuous communication between neighboring agents can be avoided and the Zeno-behavior of triggering time sequences is excluded. A numerical example is presented to illustrate the effectiveness of the obtained theoretical results.
Building Temperature Control Based on Population Dynamics Temperature control in buildings is a dynamic resource allocation problem, which can be approached using nonlinear methods based on population dynamics (i.e., replicator dynamics). A mathematical model of the proposed control technique is shown, including a stability analysis using passivity concepts for an interconnection of a linear multivariable plant driven by a nonlinear control system. In order to illustrate our control strategy, some simulations are performed, and we compare our proposed technique with other control strategies in a model with a fixed structure. Finally, experimental results are shown in order to observe the performance of some of these strategies in a multizone temperature testbed.
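The replicator dynamics underlying such an allocation scheme fit in a few lines; the zone count, fitness definition, and step size below are illustrative rather than the paper's model.

```python
# Replicator dynamics: the share x_i of the heating resource grows when
# zone i's fitness f_i (here, its temperature error) beats the
# population average, and sum(x) = 1 is preserved at every step.

def replicator_step(x, f, dt=0.05):
    f_bar = sum(xi * fi for xi, fi in zip(x, f))
    return [xi + dt * xi * (fi - f_bar) for xi, fi in zip(x, f)]

x = [1 / 3, 1 / 3, 1 / 3]        # equal initial shares for three zones
errors = [3.0, 1.0, 0.5]         # degrees below set point per zone
for _ in range(100):
    x = replicator_step(x, errors)
print([round(v, 3) for v in x])  # the coldest zone attracts the most power
```

The invariant sum(x) = 1 is exactly the fixed-total-resource property that makes replicator dynamics natural for this allocation problem.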
Self-constructing wavelet neural network algorithm for nonlinear control of large structures An adaptive control algorithm is presented for nonlinear vibration control of large structures subjected to dynamic loading. It is based on the integration of a self-constructing wavelet neural network (SCWNN), developed specifically for structural system identification, with an adaptive fuzzy sliding mode control approach. The algorithm is particularly suitable when the physical properties such as the stiffnesses and damping ratios of the structural system are unknown or only partially known, which is the case when a structure is subjected to an extreme dynamic event such as an earthquake, as the structural properties change during the event. SCWNN is developed for functional approximation of the nonlinear behavior of large structures using neural networks and wavelets. In contrast to earlier work, the identification and control are processed simultaneously, which makes the resulting adaptive control more applicable to real-life situations. A two-part growing and pruning criterion is developed to construct the hidden layer in the neural network automatically. A fuzzy compensation controller is developed to reduce the chattering phenomenon. The robustness of the proposed algorithm is achieved by deriving a set of adaptive laws for determining the unknown parameters of the wavelet neural networks using two Lyapunov functions. No offline training of the neural network is necessary for the system identification process. In addition, the earthquake signals are considered as unidentified. This is particularly important for on-line vibration control of large civil structures, since the external dynamic loading due to an earthquake is not available in advance. The model is applied to vibration control of a benchmark problem: a seismically excited continuous cast-in-place prestressed concrete box-girder highway bridge.
Gradient-Based Learning Applied to Document Recognition Multilayer neural networks trained with the back-propagation algorithm constitute the best example of a successful gradient based learning technique. Given an appropriate network architecture, gradient-based learning algorithms can be used to synthesize a complex decision surface that can classify high-dimensional patterns, such as handwritten characters, with minimal preprocessing. This paper rev...
Local and global properties in networks of processors (Extended Abstract) This paper attempts to get at some of the fundamental properties of distributed computing by means of the following question: “How much does each processor in a network of processors need to know about its own identity, the identities of other processors, and the underlying connection network in order for the network to be able to carry out useful functions?” The approach we take is to require that the processors be designed without any knowledge (or only very broad knowledge) of the networks they are to be used in, and furthermore, that all processors with the same number of communication ports be identical. Given a particular network function, e.g., setting up a spanning tree, we ask whether processors may be designed so that when they are embedded in any connected network and started in some initial configuration, they are guaranteed to accomplish the desired function.
Mdvm System Concept, Paging Latency And Round-2 Randomized Leader Election Algorithm In Sg The future trend in the computing paradigm is marked by mobile computing based on a mobile-client/server architecture connected by a wireless communication network. However, mobile computing systems have limitations because of the resource-thin mobile clients operating on battery power. The MDVM system allows the mobile clients to utilize the memory and CPU resources of Server-Groups (SG) to overcome the resource limitations of clients in order to support high-end mobile applications such as m-commerce and virtual organization (VO). In this paper the concept of the MDVM system and the architecture of a cellular network containing the SG are discussed. A round-2 randomized distributed algorithm is proposed to elect a unique leader and co-leader of the SG. The algorithm is free from any assumption about network topology or buffer space limitations and is based on dynamically elected coordinators, eliminating a single point of failure. The algorithm is implemented in a distributed system setup and the network-paging latency values of wired and wireless networks are measured experimentally. The experimental results demonstrate that in most cases the algorithm successfully terminates in the first round, and the possibility of second-round execution decreases significantly with the increase in the size of the SG (|Na|). The overall message complexity of the algorithm is O(|Na|). The comparative study of network-paging latencies indicates that 3G/4G mobile communication systems would support the realization of the MDVM system.
Sequential approximation of feasible parameter sets for identification with set membership uncertainty In this paper the problem of approximating the feasible parameter set for identification of a system in a set membership setting is considered. The system model is linear in the unknown parameters. A recursive procedure providing an approximation of the parameter set of interest through parallelotopes is presented, and an efficient algorithm is proposed. Its computational complexity is similar to that of the commonly used ellipsoidal approximation schemes. Numerical results are also reported on some simulation experiments conducted to assess the performance of the proposed algorithm.
A 10-Bit 800-MHz 19-mW CMOS ADC A pipelined ADC employs charge-steering op amps to relax the trade-offs among speed, noise, and power consumption. Applying full-rate nonlinearity and gain error calibration, a prototype realized in 65-nm CMOS technology achieves an SNDR of 52.2 dB at an input frequency of 399.2 MHz and an FoM of 53 fJ/conversion-step.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
1.213333
0.213333
0.213333
0.213333
0.213333
0.213333
0.213333
0.06
0
0
0
0
0
0
Livia: Data-Centric Computing Throughout the Memory Hierarchy In order to scale, future systems will need to dramatically reduce data movement. Data movement is expensive in current designs because (i) traditional memory hierarchies force computation to happen unnecessarily far away from data and (ii) processing-in-memory approaches fail to exploit locality. We propose Memory Services, a flexible programming model that enables data-centric computing throughout the memory hierarchy. In Memory Services, applications express functionality as graphs of simple tasks, each task indicating the data it operates on. We design and evaluate Livia, a new system architecture for Memory Services that dynamically schedules tasks and data at the location in the memory hierarchy that minimizes overall data movement. Livia adds less than 3% area overhead to a tiled multicore and accelerates challenging irregular workloads by 1.3× to 2.4× while reducing dynamic energy by 1.2× to 4.7×.
Decentralized Offload-based Execution on Memory-centric Compute Cores.
Friends and neighbors on the Web The Internet has become a rich and large repository of information about us as individuals. Anything from the links and text on a user’s homepage to the mailing lists the user subscribes to are reflections of social interactions a user has in the real world. In this paper we devise techniques and tools to mine this information in order to extract social networks and the exogenous factors underlying the networks’ structure. In an analysis of two data sets, from Stanford University and the Massachusetts Institute of Technology (MIT), we show that some factors are better indicators of social connections than others, and that these indicators vary between user populations. Our techniques provide potential applications in automatically inferring real world connections and discovering, labeling, and characterizing communities.
Problem space search algorithms for resource-constrained project scheduling The Resource-Constrained Project Scheduling (RCPS) problem is a well known and challenging combinatorial optimization problem. It is a generalization of the Job Shop Scheduling problem and thus is NP-hard in the strong sense. Problem Space Search is a local search "metaheuristic" which has been shown to be effective for a variety of combinatorial optimization problems including Job Shop Scheduling. In this paper, we propose two problem space search heuristics for the RCPS problem. These heuristics are tested through intensive computational experiments on a 480-instance RCPS data set recently generated by Kolisch et al. [12]. Using this data set we compare our heuristics with a branch-and-bound algorithm developed by Demeulemeester and Herroelen [9]. The results produced by the heuristics are extremely encouraging, showing comparable performance to the branch-and-bound algorithm.
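A minimal sketch of the problem-space idea, on illustrative data: perturb a base priority vector, decode each perturbation with a serial schedule-generation scheme for a single renewable resource, and keep the best makespan found.

```python
import random

def serial_sgs(dur, preds, dem, cap, prio):
    # Serial schedule generation: repeatedly pick the eligible activity
    # with the best priority and start it at the earliest feasible time.
    n = len(dur)
    finish = {}
    usage = [0] * 200                       # resource usage per time unit
    while len(finish) < n:
        elig = [a for a in range(n) if a not in finish
                and all(p in finish for p in preds[a])]
        a = min(elig, key=lambda v: prio[v])
        t = max((finish[p] for p in preds[a]), default=0)
        while any(usage[t + k] + dem[a] > cap for k in range(dur[a])):
            t += 1                          # shift until capacity admits it
        for k in range(dur[a]):
            usage[t + k] += dem[a]
        finish[a] = t + dur[a]
    return max(finish.values())

dur, dem, cap = [3, 2, 4, 2, 3], [2, 3, 2, 4, 2], 4
preds = [[], [0], [0], [1, 2], [3]]
best = min(serial_sgs(dur, preds, dem, cap,
                      [i + random.uniform(-2, 2) for i in range(5)])
           for _ in range(50))
print("best makespan found:", best)
```

Searching over perturbed problem data rather than over schedules is what distinguishes problem-space search from neighborhood search on the solution itself.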
Decoupling Data Supply from Computation for Latency-Tolerant Communication in Heterogeneous Architectures. In today’s computers, heterogeneous processing is used to meet performance targets at manageable power. In adopting increased compute specialization, however, the relative amount of time spent on communication increases. System and software optimizations for communication often come at the cost of increased complexity and reduced portability. The Decoupled Supply-Compute (DeSC) approach offers a way to attack communication latency bottlenecks automatically, while maintaining good portability and low complexity. Our work expands prior Decoupled Access Execute techniques with hardware/software specialization. For a range of workloads, DeSC offers roughly 2× speedup, and additional specialized compression optimizations reduce traffic between decoupled units by 40%.
Pipette: Improving Core Utilization on Irregular Applications through Intra-Core Pipeline Parallelism Applications with irregular memory accesses and control flow, such as graph algorithms and sparse linear algebra, use high-performance cores very poorly and suffer from dismal IPC. Instruction latencies are so large that even SMT cores running multiple data-parallel threads suffer poor utilization.We find that irregular applications have abundant pipeline parallelism that can be used to boost utilization: these applications can be structured as a pipeline of stages decoupled by queues. Queues hide latency very effectively when they allow producer stages to run far ahead of consumers. Prior work has proposed decoupled architectures, such as DAE and streaming multicores, that implement queues in hardware to exploit pipeline parallelism. Unfortunately, prior decoupled architectures are ill-suited to irregular applications, as they lack the control mechanisms needed to achieve decoupling, and target decoupling across cores but suffer from poor utilization within each core due to load imbalance across stages.We present Pipette, a technique that enables cheap pipeline parallelism within each core. Pipette decouples threads within the core using architecturally visible queues. Pipette’s ISA features control mechanisms that allow effective decoupling under irregular control flow. By time-multiplexing stages on the same core, Pipette avoids load imbalance and achieves high core IPC. Pipette’s novel implementation uses the physical register file to implement queues at very low cost, putting otherwise-idle registers to use. Pipette also adds cheap hardware to accelerate common access patterns, enabling fine-grain composition of accelerated accesses and general-purpose computation. As a result, Pipette outperforms data-parallel implementations of several challenging irregular applications by gmean 1.9× (and up to 3.9×).
Ultra-Elastic CGRAs for Irregular Loop Specialization Reconfigurable accelerator fabrics, including coarse-grain reconfigurable arrays (CGRAs), have experienced a resurgence in interest because they allow fast-paced software algorithm development to continue evolving post-fabrication. CGRAs traditionally target regular workloads with data-level parallelism (e.g., neural networks, image processing), but once integrated into an SoC they remain idle and...
In-Memory Data Parallel Processor. Recent developments in Non-Volatile Memories (NVMs) have opened up a new horizon for in-memory computing. Despite the significant performance gain offered by computational NVMs, previous works have relied on manual mapping of specialized kernels to the memory arrays, making it infeasible to execute more general workloads. We combat this problem by proposing a programmable in-memory processor architecture and data-parallel programming framework. The efficiency of the proposed in-memory processor comes from two sources: massive parallelism and reduction in data movement. A compact instruction set provides generalized computation capabilities for the memory array. The proposed programming framework seeks to leverage the underlying parallelism in the hardware by merging the concepts of data-flow and vector processing. To facilitate in-memory programming, we develop a compilation framework that takes a TensorFlow input and generates code for our in-memory processor. Our results demonstrate 7.5x speedup over a multi-core CPU server for a set of applications from Parsec and 763x speedup over a server-class GPU for a set of Rodinia benchmarks.
The GPU Computing Era GPU computing is at a tipping point, becoming more widely used in demanding consumer applications and high-performance computing. This article describes the rapid evolution of GPU architectures—from graphics processors to massively parallel many-core multiprocessors, recent developments in GPU computing architectures, and how the enthusiastic adoption of CPU+GPU coprocessing is accelerating parallel applications.
An ultra-wideband CMOS low noise amplifier for 3-5-GHz UWB system An ultra-wideband (UWB) CMOS low noise amplifier (LNA) topology that combines a narrowband LNA with a resistive shunt-feedback is proposed. The resistive shunt-feedback provides wideband input matching with small noise figure (NF) degradation by reducing the Q-factor of the narrowband LNA input and flattens the passband gain. The proposed UWB amplifier is implemented in 0.18-µm CMOS technol...
Digital Background Correction of Harmonic Distortion in Pipelined ADCs. Pipelined analog-to-digital converters (ADCs) are sensitive to distortion introduced by the residue amplifiers in their first few stages. Unfortunately, residue amplifier distortion tends to be inversely related to power consumption in practice, so the residue amplifiers usually are the dominant consumers of power in high-resolution pipelined ADCs. This paper presents a background calibration tech...
Design Techniques for a 66 Gb/s 46 mW 3-Tap Decision Feedback Equalizer in 65 nm CMOS. This paper analyzes and describes design techniques enabling energy-efficient multi-tap decision feedback equalizers operated at or near the speed limits of the technology. We propose a closed-loop architecture utilizing three techniques to achieve this goal, namely a merged latch and summer, reduced latch gain, and a dynamic latch design. A 65 nm CMOS 3-tap implementation of the proposed architec...
BarrierPoint: Sampled simulation of multi-threaded applications Sampling is a well-known technique to speed up architectural simulation of long-running workloads while maintaining accurate performance predictions. A number of sampling techniques have recently been developed that extend well-known single-threaded techniques to allow sampled simulation of multi-threaded applications. Unfortunately, prior work is limited to non-synchronizing applications (e.g., server throughput workloads); requires the functional simulation of the entire application using a detailed cache hierarchy which limits the overall simulation speedup potential; leads to different units of work across different processor architectures which complicates performance analysis; or, requires massive machine resources to achieve reasonable simulation speedups. In this work, we propose BarrierPoint, a sampling methodology to accelerate simulation by leveraging globally synchronizing barriers in multi-threaded applications. BarrierPoint collects microarchitecture-independent code and data signatures to determine the most representative inter-barrier regions, called barrierpoints. BarrierPoint estimates total application execution time (and other performance metrics of interest) through detailed simulation of these barrierpoints only, leading to substantial simulation speedups. Barrierpoints can be simulated in parallel, use fewer simulation resources, and define fixed units of work to be used in performance comparisons across processor architectures. Our evaluation of BarrierPoint using NPB and Parsec benchmarks reports average simulation speedups of 24.7× (and up to 866.6×) with an average simulation error of 0.9% and 2.9% at most. On average, BarrierPoint reduces the number of simulation machine resources needed by 78×.
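A rough sketch of the selection step BarrierPoint automates: cluster per-region, microarchitecture-independent signatures, simulate one representative per cluster in detail, and extrapolate whole-program metrics. The signatures, cluster count, and per-region times below are synthetic placeholders, and the real methodology uses richer code and data signatures than this toy k-means.

```python
# Illustrative barrierpoint selection: representative inter-barrier regions
# chosen by clustering signature vectors (all data here is made up).
import numpy as np

rng = np.random.default_rng(0)
signatures = rng.random((12, 8))   # one signature per inter-barrier region
k = 3                              # number of representatives to keep

# Tiny k-means (Lloyd's algorithm) over the signature vectors.
centroids = signatures[rng.choice(len(signatures), k, replace=False)]
for _ in range(20):
    labels = np.argmin(((signatures[:, None] - centroids) ** 2).sum(-1), axis=1)
    centroids = np.array([
        signatures[labels == c].mean(axis=0) if np.any(labels == c) else centroids[c]
        for c in range(k)
    ])

# The region closest to each centroid becomes a "barrierpoint"; its detailed
# simulation result is weighted by how many regions its cluster covers.
sim_time = rng.random(len(signatures))  # stand-in per-region simulated times
total = 0.0
for c in range(k):
    members = np.where(labels == c)[0]
    if len(members) == 0:
        continue
    rep = members[np.argmin(((signatures[members] - centroids[c]) ** 2).sum(-1))]
    total += sim_time[rep] * len(members)
print(f"estimated total time: {total:.3f}")
```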
An Event-Driven Quasi-Level-Crossing Delta Modulator Based on Residue Quantization This article introduces a digitally intensive event-driven quasi-level-crossing (quasi-LC) delta-modulator analog-to-digital converter (ADC) with adaptive resolution (AR) for Internet of Things (IoT) wireless networks, in which minimizing the average sampling rate for sparse input signals can significantly reduce the power consumed in data transmission, processing, and storage. The proposed AR quasi-LC delta modulator quantizes the residue voltage signal with a 4-bit asynchronous successive-approximation-register (SAR) sub-ADC, which enables a straightforward implementation of LC and AR algorithms in the digital domain. The proposed modulator achieves data compression by means of a globally signal-dependent average sampling rate and achieves AR through a digital multi-level comparison window that overcomes the tradeoff between the dynamic range and the input bandwidth in the conventional LC ADCs. Engaging the AR algorithm reduces the average sampling rate by a factor of 3 at the edge of the modulator’s signal bandwidth. The proposed modulator is fabricated in 28-nm CMOS and achieves a peak SNDR of 53 dB over a signal bandwidth of 1.42 MHz while consuming 205 µW and occupying an active area of 0.0126 mm².
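The event-driven sampling principle is easy to sketch behaviorally: emit a sample only when the input leaves a comparison window, and quantize the residue with a few bits so the tracked level is updated digitally. The window width, resolution, and test signal below are illustrative choices, not the paper's circuit parameters.

```python
# Behavioral sketch of (quasi-)level-crossing sampling with residue
# quantization; parameters are illustrative, not from the actual design.
import math

def quasi_lc_sample(signal, window=0.1, bits=4):
    lsb = 2 * window / (2 ** bits)      # residue quantizer step
    last, events = 0.0, []
    for n, x in enumerate(signal):
        residue = x - last
        if abs(residue) >= window:      # event: input left the window
            code = max(-(2**(bits-1)), min(2**(bits-1) - 1, round(residue / lsb)))
            last += code * lsb          # update the tracked level digitally
            events.append((n, code))
    return events

sig = [math.sin(2 * math.pi * n / 200) for n in range(1000)]
events = quasi_lc_sample(sig)
print(f"{len(events)} events for 1000 input samples")  # data compression
```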
Scores: 1.033333, 0.033333, 0.033333, 0.033333, 0.033333, 0.033333, 0.016667, 0.004167, 0, 0, 0, 0, 0, 0
Spectrum load smoothing for cognitive medium access in open spectrum Today's framework for radio spectrum regulation and the way the usage of radio spectrum is coordinated is undergoing vital changes. In the face of scarce radio resources, regulators, industry, and the research community are initiating promising approaches towards a more flexible spectrum usage, referred to as open spectrum. In this paper we discuss medium access control protocols for spectrum agile radios that opportunistically use spectrum, also referred to as "cognitive radio". Spectrum agile radios operate in parts of the spectrum originally licensed to other radio services. They identify free spectrum, coordinate its usage and release it when this is required by licensed radio systems. The application of "waterfilling" from information theory, referred to as spectrum load smoothing (SLS), and its realization in IEEE 802.11e-based spectrum agile wireless networks are examined in this paper. SLS, as an intelligent principle of spectrum usage, targets distributed quality-of-service support in scenarios of coexisting spectrum agile radios. With SLS, spectrum agile radios observe the past usage of the spectrum while avoiding harmful interference to license-holding radio systems. SLS can therefore be referred to as cognitive medium access. In this paper, the capability to support quality-of-service in the presence of other, competing spectrum agile networks and the protection of licensed radio networks are evaluated with the help of simulation. The efficiency of SLS for open spectrum access is demonstrated.
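For reference, the "waterfilling" principle that SLS adapts can be sketched in a few lines: allocate a fixed budget across channels so that the level (noise floor plus allocation) is equalized, favoring channels with low floors. The noise values and budget here are arbitrary, and bisection on the water level is just one standard way to solve it.

```python
# Classic waterfilling, the principle SLS borrows: raise a common "water
# level" mu over per-channel noise floors until the budget is spent.
def waterfill(noise, total_power, iters=60):
    lo, hi = min(noise), max(noise) + total_power
    for _ in range(iters):              # bisect on the water level mu
        mu = (lo + hi) / 2
        used = sum(max(mu - n, 0.0) for n in noise)
        if used > total_power:
            hi = mu
        else:
            lo = mu
    return [max(mu - n, 0.0) for n in noise]

alloc = waterfill(noise=[0.2, 0.5, 0.1, 0.8], total_power=1.0)
print([round(a, 3) for a in alloc])     # more power where the floor is low
```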
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use dominance frontiers, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
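As a sketch of how dominance frontiers are computed in practice, the snippet below uses the later Cooper-Harvey-Kennedy formulation (not this paper's original bottom-up algorithm) on a hypothetical diamond-shaped CFG; a real compiler would first derive the immediate-dominator map, e.g. with Lengauer-Tarjan, rather than hard-coding it.

```python
# Dominance frontiers via the Cooper-Harvey-Kennedy formulation: for each
# join node, walk each predecessor up the dominator tree until the node's
# immediate dominator, adding the join node to every walked node's DF.
preds = {"entry": [], "a": ["entry"], "b": ["entry"], "c": ["a", "b"]}
idom  = {"a": "entry", "b": "entry", "c": "entry"}   # hypothetical idom map

df = {n: set() for n in preds}                       # dominance frontiers
for node, ps in preds.items():
    if len(ps) >= 2:                                 # only join points matter
        for runner in ps:
            while runner != idom[node]:              # walk up the dom tree
                df[runner].add(node)
                runner = idom[runner]

print(df)   # a and b each have c in their dominance frontier
```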
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ⋄W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ⋄W. Thus, ⋄W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
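Chord's single operation, mapping a key to its successor node on the identifier ring, can be sketched with consistent hashing. This toy version searches a sorted ring directly instead of performing Chord's O(log N) finger-table hops, and the hash width and node names are arbitrary.

```python
# Minimal consistent-hashing sketch of Chord's one operation: map a key to
# the first node whose identifier succeeds the key's hash on the ring.
import bisect, hashlib

M = 2 ** 32                               # identifier space (illustrative)

def h(s: str) -> int:
    return int.from_bytes(hashlib.sha1(s.encode()).digest()[:4], "big") % M

nodes = sorted(h(f"node{i}") for i in range(8))

def successor(key: str) -> int:
    i = bisect.bisect_right(nodes, h(key))
    return nodes[i % len(nodes)]          # wrap around the ring

print(successor("some-data-item"))        # node id responsible for the key
```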
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area² product (EDA²P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA²P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
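As one concrete instance from the problems listed above, a minimal ADMM solver for the lasso alternates a ridge-like x-update, a soft-thresholding z-update, and a dual update. The problem sizes, penalty parameters, and fixed iteration count below are illustrative simplifications of what the review describes.

```python
# Minimal ADMM for the lasso: minimize (1/2)||Ax - b||^2 + lam*||x||_1.
import numpy as np

def lasso_admm(A, b, lam=0.1, rho=1.0, iters=200):
    n = A.shape[1]
    x = z = u = np.zeros(n)
    AtA, Atb = A.T @ A, A.T @ b
    L = np.linalg.cholesky(AtA + rho * np.eye(n))   # factor once, reuse
    for _ in range(iters):
        # x-update: ridge-like solve using the cached Cholesky factor
        x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))
        # z-update: elementwise soft thresholding
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0.0)
        u = u + x - z                               # dual update
    return z

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 20))
b = A @ (rng.standard_normal(20) * (rng.random(20) < 0.3)) \
    + 0.01 * rng.standard_normal(50)
print(np.count_nonzero(lasso_admm(A, b)))           # recovers a sparse x
```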
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)-size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε²}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D + √n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(log N), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator; while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via error amplifier) with a moderate-gain high-frequency path (via bandpass filter) at the inputs of PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate two zeros required in a Type III compensator. Constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process with constant Gm/C biasing technique being applied to one of the designs. Measurement result shows that converter output is settled within 7 μs for a load current step of 500 mA. Peak efficiency of 97% is obtained at 360 mW output power, and high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of proposed compensator is reduced by > 75 % in both designs, compared to an equivalent conventional Type III compensator.
Conductance modulation techniques in switched-capacitor DC-DC converter for maximum-efficiency tracking and ripple mitigation in 22nm Tri-gate CMOS Active conduction modulation techniques are demonstrated in a fully integrated multi-ratio switched-capacitor voltage regulator with hysteretic control, implemented in 22nm tri-gate CMOS with high-density MIM capacitor. We present (i) an adaptive switching frequency and switch-size scaling scheme for maximum efficiency tracking across a wide range of voltages and currents, governed by a frequency-based control law that is experimentally validated across multiple dies and temperatures, and (ii) a simple active ripple mitigation technique to modulate the gate drive of select MOSFET switches effectively in all conversion modes. Efficiency boosts of up to 15% are measured under light-load conditions. Load-independent output ripple of <50 mV is achieved, reducing the interleaving required. Testchip implementations and measurements demonstrate ease of integration in SoC designs, power efficiency benefits, and EMI/RFI improvements.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
Scores: 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
A Light-Load Efficient Fully Integrated Voltage Regulator in 14-nm CMOS With 2.5-nH Package-Embedded Air-Core Inductors Fully integrated voltage regulators (FIVRs) offer many advantages, such as fine-grained power management, fast transient response, and reduced form factor. This article addresses light-load efficiency in FIVRs with nH-scale air-core inductors. The challenges of implementing efficient discontinuous conduction mode (DCM) operation at high switching frequencies are discussed, which include zero current detection, inductor ac-loss effects, and power delivery network (PDN) resonances. A prototype in 14-nm CMOS is presented, which shows the DCM operation at up to 70 MHz with a peak efficiency of 88% for 1.6–1.2-V conversion.
Circuit Techniques for High Efficiency Fully-Integrated Switched-Capacitor Converters This brief presents a tutorial on recent circuit design techniques for high efficiency fully-integrated switched-capacitor (SC) converters. Design challenges for fully-integrated SC converters are highlighted, followed by consideration of tradeoffs among topology generation, parasitic loss reduction, clock generation and distribution, and closed-loop regulation. Circuit techniques and design guidelines are suggested.
Design of Soft-Charging Switched-Capacitor DC-DC Converters Using Stage Outphasing and Multiphase Soft-Charging. In this paper, two techniques, called stage-outphasing (SO) and multiphase soft-charging (MSC), are introduced, which make use of the advanced multiphasing concept to soft-charge charge transfers between flying capacitors. As such, the charge sharing losses of fully integrated switched-capacitor (SC) converters are reduced, leading to better capacitance utilization, higher efficiency, and higher p...
Algorithmic Voltage-Feed-In Topology for Fully Integrated Fine-Grained Rational Buck-Boost Switched-Capacitor DC-DC Converters. We propose an algorithmic voltage-feed-in (AVFI) topology capable of systematic generation of any arbitrary buck-boost rational ratio with optimal conduction loss while achieving reduced topology-level parasitic loss among the state-of-the-art works. By disengaging the existing topology-level restrictions, we develop a cell-level implementation using the extracted Dickson cell (DSC) and charge-pat...
Digital 2-/3-Phase Switched-Capacitor Converter With Ripple Reduction and Efficiency Improvement. This paper presents a digitally controlled 2-/3-phase 6-ratio switched-capacitor (SC) dc-dc converter with low output voltage ripple and high efficiency. To achieve wide input and output voltage ranges, six voltage conversion ratios are generated with only two discrete flying capacitors by using both 2and 3-phase operations. An adaptive ripple reduction scheme is proposed to achieve up to four tim...
Chord: a scalable peer-to-peer lookup protocol for internet applications A fundamental problem that confronts peer-to-peer applications is the efficient location of the node that stores a desired data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis and simulations show that Chord is scalable: Communication cost and the state maintained by each node scale logarithmically with the number of Chord nodes.
Computing size-independent matrix problems on systolic array processors A methodology to transform dense matrices to band matrices is presented in this paper. This transformation is accomplished by partitioning into triangular blocks, and it allows solutions to problems of any given size to be implemented by means of the contraflow systolic arrays originally proposed by H.T. Kung. Matrix-vector and matrix-matrix multiplications are the operations considered here. The proposed transformations allow optimal utilization of the processing elements (PEs) of the systolic array when dense matrices are processed. Every computation is performed inside the array by using adequate feedback. The feedback delay time depends only on the systolic array size.
A 12 bit 2.9 GS/s DAC With IM3 ≪ −60 dBc Beyond 1 GHz in 65 nm CMOS A 12 bit 2.9 GS/s current-steering DAC implemented in 65 nm CMOS is presented, with an IM3 < −60 dBc beyond 1 GHz while driving a 50 Ω load with an output swing of 2.5 Vppd and dissipating a power of 188 mW. The SFDR measured at 2.9 GS/s is better than 60 dB beyond 340 MHz while the SFDR measured at 1.6 GS/s is better than 60 dB beyond 440 MHz. The increase in performance at high-frequencies, co...
The M-Machine multicomputer The M-Machine is an experimental multicomputer being developed to test architectural concepts motivated by the constraints of modern semiconductor technology and the demands of programming systems. The M-Machine computing nodes are connected with a 3-D mesh network; each node is a multithreaded processor incorporating 9 function units, on-chip cache, and local memory. The multiple function units are used to exploit both instruction-level and thread-level parallelism. A user accessible message passing system yields fast communication and synchronization between nodes. Rapid access to remote memory is provided transparently to the user with a combination of hardware and software mechanisms. This paper presents the architecture of the M-Machine and describes how its mechanisms attempt to maximize both single thread performance and overall system throughput. The architecture is complete and the MAP chip, which will serve as the M-Machine processing node, is currently being implemented.
SPONGENT: a lightweight hash function This paper proposes spongent - a family of lightweight hash functions with hash sizes of 88 (for preimage resistance only), 128, 160, 224, and 256 bits based on a sponge construction instantiated with a present-type permutation, following the hermetic sponge strategy. Its smallest implementations in ASIC require 738, 1060, 1329, 1728, and 1950 GE, respectively. To our best knowledge, at all security levels attained, it is the hash function with the smallest footprint in hardware published so far, the parameter being highly technology dependent. spongent offers a lot of flexibility in terms of serialization degree and speed. We explore some of its numerous implementation trade-offs. We furthermore present a security analysis of spongent. Basing the design on a present-type primitive provides confidence in its security with respect to the most important attacks. Several dedicated attack approaches are also investigated.
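The sponge construction itself is simple to sketch: XOR message blocks into the rate portion of the state, permute, then squeeze output blocks. The permutation below is a toy stand-in with no cryptographic strength, not the present-type permutation spongent uses, and the rate/capacity split is illustrative.

```python
# Toy sponge construction (absorb/squeeze around a fixed permutation).
# NOT cryptographically secure: the permutation is a placeholder.
RATE, CAPACITY = 1, 7                     # state = rate + capacity bytes

def toy_permutation(state: bytearray) -> bytearray:
    # Placeholder mixing rounds; a real design uses a strong permutation.
    for _ in range(8):
        acc = 0x45
        for i in range(len(state)):
            acc = (acc * 31 + state[i]) & 0xFF
            state[i] = ((state[i] << 1) | (state[i] >> 7)) & 0xFF ^ acc
    return state

def sponge_hash(msg: bytes, out_len: int = 8) -> bytes:
    state = bytearray(RATE + CAPACITY)
    msg += b"\x01" + b"\x00" * (-(len(msg) + 1) % RATE)   # simple padding
    for i in range(0, len(msg), RATE):                    # absorbing phase
        for j in range(RATE):
            state[j] ^= msg[i + j]
        state = toy_permutation(state)
    out = bytearray()                                     # squeezing phase
    while len(out) < out_len:
        out += state[:RATE]
        state = toy_permutation(state)
    return bytes(out[:out_len])

print(sponge_hash(b"hello").hex())
```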
MicroGP—An Evolutionary Assembly Program Generator This paper describes µGP, an evolutionary approach for generating assembly programs tuned for a specific microprocessor. The approach is based on three clearly separated blocks: an evolutionary core, an instruction library and an external evaluator. The evolutionary core conducts adaptive population-based search. The instruction library is used to map individuals to valid assembly language programs. The external evaluator simulates the assembly program, providing the necessary feedback to the evolutionary core. µGP has some distinctive features that allow its use in specific contexts. This paper focuses on one such context: test program generation for design validation of microprocessors. Reported results show µGP being used to validate a complex 5-stage pipelined microprocessor. Its induced test programs outperform an exhaustive functional test and an instruction randomizer, showing that engineers are able to automatically obtain high-quality test programs.
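The three-block structure described above maps naturally onto a small skeleton: an evolutionary core, an instruction library, and an external evaluator. Everything below is a toy stand-in; in particular, the fitness function merely rewards instruction diversity, whereas the real flow simulates the generated program and measures, e.g., coverage.

```python
# Skeleton of the three-block structure: evolutionary core, instruction
# library, external evaluator. The "assembly" dialect and fitness are toys.
import random

LIBRARY = ["ADD r1, r2", "SUB r1, r2", "MOV r1, r2", "NOP", "JMP l0"]

def evaluate(program):
    # External-evaluator stand-in (a real flow would simulate the program).
    return len(set(program)) / len(program)

def evolve(pop_size=20, prog_len=10, generations=50):
    pop = [[random.choice(LIBRARY) for _ in range(prog_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=evaluate, reverse=True)
        survivors = pop[: pop_size // 2]          # selection
        children = []
        for parent in survivors:
            child = parent[:]
            child[random.randrange(prog_len)] = random.choice(LIBRARY)  # mutate
            children.append(child)
        pop = survivors + children
    return max(pop, key=evaluate)

print("\n".join(evolve()))
```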
The accelerator store: A shared memory framework for accelerator-based systems This paper presents the many-accelerator architecture, a design approach combining the scalability of homogeneous multi-core architectures and system-on-chip's high performance and power-efficient hardware accelerators. In preparation for systems containing tens or hundreds of accelerators, we characterize a diverse pool of accelerators and find each contains significant amounts of SRAM memory (up to 90% of their area). We take advantage of this discovery and introduce the accelerator store, a scalable architectural component to minimize accelerator area by sharing its memories between accelerators. We evaluate the accelerator store for two applications and find significant system area reductions (30%) in exchange for small overheads (2% performance, 0%–8% energy). The paper also identifies new research directions enabled by the accelerator store and the many-accelerator architecture.
A 12.8 GS/s Time-Interleaved ADC With 25 GHz Effective Resolution Bandwidth and 4.6 ENOB This paper presents a 12.8 GS/s 32-way hierarchically time-interleaved SAR ADC with 4.6 ENOB in 65 nm CMOS. The prototype utilizes hierarchical sampling and cascode sampler circuits to enable greater than 25 GHz 3 dB effective resolution bandwidth (ERBW). We further employ a pseudo-differential SAR ADC to save power and area. The core circuit occupies only 0.23 mm² and consumes a total of 162 mW from dual 1.2 V/1.1 V supplies. The design achieves a SNDR of 29.4 dB at low frequencies and 26.4 dB at 25 GHz, resulting in a figure-of-merit of 0.79 pJ/conversion-step. As will be further described in the paper, the circuit architecture used in this prototype enables expansion to 25.6 GS/s or 51.2 GS/s via additional interleaving without significantly impacting ERBW.
A 12.6 mW, 573-2901 kS/s Reconfigurable Processor for Reconstruction of Compressively Sensed Physiological Signals. This article presents a reconfigurable processor based on the alternating direction method of multipliers (ADMM) algorithm for reconstructing compressively sensed physiological signals. The architecture is flexible to support physiological ExG [electrocardiography (ECG), electromyography (EMG), and electroencephalography (EEG)] signal with various signal dimensions (128, 256, 384, and 512). Data c...
Scores: 1.2, 0.2, 0.1, 0.066667, 0.025, 0, 0, 0, 0, 0, 0, 0, 0, 0
Wide-Band CMOS Low-Noise Amplifier Exploiting Thermal Noise Canceling Known elementary wide-band amplifiers suffer from a fundamental tradeoff between noise figure (NF) and source impedance matching, which limits the NF to values typically above 3 dB. Global negative feedback can be used to break this tradeoff, however, at the price of potential instability. In contrast, this paper presents a feedforward noise-canceling technique, which allows for simultaneous noise...
Analysis and design of common-gate low-noise amplifier for wideband applications The design of a common-gate (CG) LNA for wideband applications is discussed in this paper. The effect of the different components in the matching network is analyzed in detail. The design of a wideband input matching and output signal current for the input stage is presented. In addition, the effect of the matching network on the linearity and noise of a CG stage is studied. A design example is given to demonstrate the effectiveness of the presented theory.
A low power UWB very low noise amplifier using an improved noise reduction technique A single-ended ultra wideband (UWB) low noise amplifier (LNA) employing an improved noise reduction (INR) technique is presented. The INR consists of three main components: active positive feedback, an input matching extender, and a transformer. The input matching extender helps to preserve the input return loss (S11) below -10 dB over the entire bandwidth from 2 to 7.6 GHz. Using active positive feedback and the transformer reduces the noise figure (NF) significantly. Moreover, compared to a version with no transformer and the same gain, the power consumption is reduced to about half with the aid of the transformer. Simulated in a 0.13-μm RF CMOS technology, the proposed LNA achieves a power gain of 13.8 dB with only 0.7 dB variation, and the NF is between 1.85-2.1 dB over the whole bandwidth while consuming only 2.15 mW of dc power from a 1.1 V supply voltage.
Design of low power CMOS ultra wide band low noise amplifier using noise canceling technique This paper presents a design of a low power CMOS ultra-wideband (UWB) low noise amplifier (LNA) using a noise canceling technique with the TSMC 0.18-μm RF CMOS process. The proposed UWB LNA employs a current-reused structure to decrease the total power consumption instead of using a cascade stage. This structure spends the same DC current for operating two transistors simultaneously. The stagger-tuning technique, which was reported to achieve gain flatness in the required frequency, was adopted to have low and high resonance frequency points over the entire bandwidth from 3.1 to 10.6 GHz. The resonance points were set at 3 GHz and 10 GHz to provide enough gain flatness and return loss. In addition, the noise canceling technique was used to cancel the dominant noise source, which is generated by the first transistor. The simulation results show a flat gain (S21 of about 10 dB) with a good input impedance matching of less than -10 dB and a minimum noise figure of 2.9 dB over the entire band. The proposed UWB LNA consumes 15.2 mW from a 1.8 V power supply.
A Wideband CMOS Low Noise Amplifier Employing Noise and IM2 Distortion Cancellation for a Digital TV Tuner A wideband CMOS low noise amplifier (LNA) with single-ended input and output employing noise and IM2 distortion cancellation for a digital terrestrial and cable TV tuner is presented. By adopting a noise canceling structure combining a common source amplifier and a common gate amplifier by current amplification, the LNA obtains a low noise figure and high IIP3. IIP2 as well as IIP3 of the LNA is important in broadband systems, especially digital terrestrial and cable TV applications. Accordingly, in order to overcome the poor IIP2 performance of conventional LNAs with single-ended input and output and avoid the use of external and bulky passive transformers along with high sensitivity, an IM2 distortion cancellation technique exploiting the complementary RF performance of NMOS and PMOS while retaining thermal noise canceling is adopted in the LNA. The proposed LNA is implemented in a 0.18 μm CMOS process and achieves a power gain of 14 dB, an average noise figure of 3 dB, an IIP3 of 3 dBm, an IIP2 of 44 dBm at maximum gain, and S11 of under -9 dB in a frequency range from 50 MHz to 880 MHz. The power consumption is 34.8 mW at 2.2 V and the chip area is 0.16 mm².
A Reconfigurable Narrow-Band MB-OFDM UWB Receiver Architecture This paper presents an analysis on the receiver front-end architectures for multiband orthogonal frequency-division multiplexing ultra-wide-band (UWB) terminals. An interference analysis is carried out in order to derive the main linearity specifications of the receiver front-end. A reconfigurable narrow-band architecture is introduced that can best cope with the main challenges of the UWB receivers: broadband impedance matching and high out-of-band linearity. Simulation results show that linearity requirements can be achieved with sizeable margin.
A Fully Differential Band-Selective Low-Noise Amplifier for MB-OFDM UWB Receivers A band-selective low-noise amplifier (BS-LNA) for multiband orthogonal frequency-division multiplexing ultra-wide-band (UWB) receivers is presented. A switched capacitive network that controls the resonant frequency of the LC load for the band selection is used. It greatly enhances the gain and noise performance of the LNA in each frequency band without increasing power consumption. Moreover, a fu...
An Ultra-Wide-Band 0.4-10-GHz LNA in 0.18-µm CMOS A two-stage ultra-wide-band CMOS low-noise amplifier (LNA) is presented. With the common-gate configuration employed as the input stage, broad-band input matching is obtained and the noise does not rise rapidly at higher frequency. By combining the common-gate and common-source stages, the broad-band characteristic and small area are achieved by using two inductors. This LNA has been fabricated in a 0.18-µm CMOS process. The measured power gain is 11.2-12.4 dB and the noise figure is 4.4-6.5 dB with a -3-dB bandwidth of 0.4-10 GHz. The measured IIP3 is -6 dBm at 6 GHz. It consumes 12 mW from a 1.8-V supply voltage and occupies only 0.42 mm².
Switched-capacitor track-and-hold amplifier with low sensitivity to op-amp imperfections This paper describes a high-precision switched-capacitor (SC) track-and-hold amplifier (THA) stage. It uses a novel continuous-time correlated double sampling (CDS) scheme to desensitize the operation to amplifier imperfections. Unlike earlier predictive-CDS THAs, the circuit does not need a sample-and-held input signal for its operation. During the tracking period, an auxiliary continuous-time signal path is established, which predicts the output voltage during the holding period. This allows accurate operation even for low amplifier gains and large offsets over a wide input frequency range. Extensive simulations were performed to compare the performance of the proposed THA with earlier circuits utilizing CDS. The results verify that its operation is far more robust than that of any previously described THA.
The PARSEC benchmark suite: characterization and architectural implications This paper presents and characterizes the Princeton Application Repository for Shared-Memory Computers (PARSEC), a benchmark suite for studies of Chip-Multiprocessors (CMPs). Previous available benchmarks for multiprocessors have focused on high-performance computing applications and used a limited number of synchronization methods. PARSEC includes emerging applications in recognition, mining and synthesis (RMS) as well as systems applications which mimic large-scale multithreaded commercial programs. Our characterization shows that the benchmark suite covers a wide spectrum of working sets, locality, data sharing, synchronization and off-chip traffic. The benchmark suite has been made available to the public.
Distributed computation in dynamic networks In this paper we investigate distributed computation in dynamic networks in which the network topology changes from round to round. We consider a worst-case model in which the communication links for each round are chosen by an adversary, and nodes do not know who their neighbors for the current round are before they broadcast their messages. The model captures mobile networks and wireless networks, in which mobility and interference render communication unpredictable. In contrast to much of the existing work on dynamic networks, we do not assume that the network eventually stops changing; we require correctness and termination even in networks that change continually. We introduce a stability property called T-interval connectivity (for T ≥ 1), which stipulates that for every T consecutive rounds there exists a stable connected spanning subgraph. For T = 1 this means that the graph is connected in every round, but changes arbitrarily between rounds. We show that in 1-interval connected graphs it is possible for nodes to determine the size of the network and compute any computable function of their initial inputs in O(n²) rounds using messages of size O(log n + d), where d is the size of the input to a single node. Further, if the graph is T-interval connected for T > 1, the computation can be sped up by a factor of T, and any function can be computed in O(n + n²/T) rounds using messages of size O(log n + d). We also give two lower bounds on the token dissemination problem, which requires the nodes to disseminate k pieces of information to all the nodes in the network. The T-interval connected dynamic graph model is a novel model, which we believe opens new avenues for research in the theory of distributed computing in wireless, mobile and dynamic networks.
An adaptive stabilization framework for distributed hash tables Distributed Hash Tables (DHT) algorithms obtain good lookup performance bounds by using deterministic rules to organize peer nodes into an overlay network. To preserve the invariants of the overlay network, DHTs use stabilization procedures that reorganize the topology graph when participating nodes join or fail. Most DHTs use periodic stabilization, in which peers perform stabilization at fixed intervals of time, disregarding the rate of change in overlay topology; this may lead to poor performance and large stabilization-induced communication overhead. We propose a novel adaptive stabilization framework that takes into consideration the continuous evolution in network conditions. Each peer collects statistical data about the network and dynamically adjusts its stabilization rate based on the analysis of the data. The objective of our scheme is to maintain nominal network performance and to minimize the communication overhead of stabilization.
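The core feedback loop such a framework needs can be sketched in a few lines: smooth the observed churn and shrink the stabilization interval when churn rises. The smoothing factor, interval bounds, and inverse-churn control law below are illustrative choices, not the paper's specific statistical analysis.

```python
# Sketch of adaptive stabilization: each peer tracks observed churn
# (neighbor-set changes per probe) and scales its interval inversely.
class AdaptiveStabilizer:
    def __init__(self, base=30.0, min_iv=1.0, max_iv=300.0, alpha=0.2):
        self.interval, self.min_iv, self.max_iv = base, min_iv, max_iv
        self.alpha, self.churn = alpha, 0.0

    def observe(self, neighbor_changes: int):
        # EWMA of recent churn: reacts to change, forgets stale history.
        self.churn = (1 - self.alpha) * self.churn + self.alpha * neighbor_changes
        target = self.max_iv / (1.0 + self.churn * 10.0)  # illustrative law
        self.interval = min(self.max_iv, max(self.min_iv, target))

s = AdaptiveStabilizer()
for changes in [0, 0, 3, 5, 1, 0, 0, 0]:
    s.observe(changes)
    print(f"churn={s.churn:.2f} -> stabilize every {s.interval:.1f}s")
```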
A Multiphase Buck Converter With a Rotating Phase-Shedding Scheme For Efficient Light-Load Control Mobile devices need to minimize their power consumption in order to maximize battery runtime, except during short extremely busy periods. This requirement makes dc-dc converters usually operate in standby mode or under light-load conditions. Therefore, implementation of an efficient regulation scheme under a light load is a key aspect of dc-dc converter design. This paper presents a multiphase buck converter with a rotating phase-shedding scheme for efficient light-load control. The converter includes four phases operating in an interleaved manner in order to supply high current with low output ripple. The multiphase converter implements a rotating phase-shedding scheme to distribute the switching activity concentrated on a single phase, resulting in a distribution of the aging effects among the phases instead of a single phase. The proposed multiphase buck converter was fabricated using a 0.18 μm bipolar CMOS DMOS process. The supply voltage ranges from 2.7 V to 5 V, and the maximum allowable output current is 4.5 A.
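The rotating phase-shedding policy itself reduces to a small scheduling rule: activate only as many phases as the load requires, and rotate which physical phases those are so switching stress and aging spread across all of them. The phase count, per-phase current, and rotate-every-cycle policy below are illustrative.

```python
# Behavioral sketch of rotating phase shedding for a 4-phase buck converter.
N_PHASES = 4
PER_PHASE_CURRENT = 1.125   # amps per phase (4.5 A / 4, illustrative)

def active_phases(load_amps: float, cycle: int) -> list[int]:
    needed = max(1, -(-load_amps // PER_PHASE_CURRENT))   # ceiling division
    needed = int(min(N_PHASES, needed))
    start = cycle % N_PHASES                              # rotation offset
    return [(start + i) % N_PHASES for i in range(needed)]

for cycle in range(6):
    print(cycle, active_phases(load_amps=0.8, cycle=cycle))
# light load -> one active phase, but a different one each cycle
```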
A 0.5 V 10-bit 3 MS/s SAR ADC With Adaptive-Reset Switching Scheme and Near-Threshold Voltage-Optimized Design Technique This brief presents a 10-bit ultra-low power energy-efficient successive approximation register (SAR) analog-to-digital converter (ADC). A new adaptive-reset switching scheme is proposed to reduce the switching energy of the capacitive digital-to-analog converter (CDAC). The proposed adaptive-reset switching scheme reduces the average switching energy of the CDAC by 90% compared to the conventional scheme without the common-mode voltage variation. In addition, the near-threshold voltage (NTV)-optimized digital library is adopted to alleviate the performance degradation in the ultra-low supply voltage while simultaneously increasing the energy efficiency. The NTV-optimized design technique is also introduced to the bootstrapped switch design to improve the linearity of the sample-and-hold circuit. The test chip is fabricated in a 65 nm CMOS, and its core area is 0.022 mm². At a supply of 0.5 V and sampling speed of 3 MS/s, the SAR ADC achieves an ENOB of 8.78 bit and consumes 3.09 μW. The resultant Walden figure-of-merit (FoM) is 2.34 fJ/conv.-step.
Scores: 1.004887, 0.008163, 0.008163, 0.005061, 0.004175, 0.004082, 0.002336, 0.001042, 0.000011, 0, 0, 0, 0, 0
The Mondrian Data Engine. The increasing demand for extracting value out of ever-growing data poses an ongoing challenge to system designers, a task only made trickier by the end of Dennard scaling. As the performance density of traditional CPU-centric architectures stagnates, advancing compute capabilities necessitates novel architectural approaches. Near-memory processing (NMP) architectures are reemerging as promising candidates to improve computing efficiency through tight coupling of logic and memory. NMP architectures are especially fitting for data analytics, as they provide immense bandwidth to memory-resident data and dramatically reduce data movement, the main source of energy consumption. Modern data analytics operators are optimized for CPU execution and hence rely on large caches and employ random memory accesses. In the context of NMP, such random accesses result in wasteful DRAM row buffer activations that account for a significant fraction of the total memory access energy. In addition, utilizing NMP's ample bandwidth with fine-grained random accesses requires complex hardware that cannot be accommodated under NMP's tight area and power constraints. Our thesis is that efficient NMP calls for an algorithm-hardware co-design that favors algorithms with sequential accesses to enable simple hardware that accesses memory in streams. We introduce an instance of such a co-designed NMP architecture for data analytics, the Mondrian Data Engine. Compared to a CPU-centric and a baseline NMP system, the Mondrian Data Engine improves the performance of basic data analytics operators by up to 49x and 5x, and efficiency by up to 28x and 5x, respectively.
GP-SIMD Processing-in-Memory GP-SIMD, a novel hybrid general-purpose SIMD computer architecture, resolves the issue of data synchronization by in-memory computing through combining data storage and massively parallel processing. GP-SIMD employs a two-dimensional access memory with modified SRAM storage cells and a bit-serial processing unit per each memory row. An analytic performance model of the GP-SIMD architecture is presented, comparing it to associative processor and to conventional SIMD architectures. Cycle-accurate simulation of four workloads supports the analytical comparison. Assuming a moderate die area, GP-SIMD architecture outperforms both the associative processor and conventional SIMD coprocessor architectures by almost an order of magnitude while consuming less power.
Evolution of Memory Architecture Computer memories continue to serve the role that they first served in the electronic discrete variable automatic computer (EDVAC) machine documented by John von Neumann, namely that of supplying instructions and operands for calculations in a timely manner. As technology has made possible significantly larger and faster machines with multiple processors, the relative distance in processor cycles ...
Rebooting the Data Access Hierarchy of Computing Systems We have been experiencing two very important movements in computing. On the one hand, a tremendous amount of resource has been invested into innovative applications such as first-principle-based methods, deep learning and cognitive computing. On the other hand, the industry has been taking a technological path where application performance and energy efficiency vary by more than two orders of magnitude depending on their parallelism, heterogeneity, and locality. We envision that a "perfect storm" is coming because of the interaction between these two movements. Many of these new and high-valued applications need to touch a very large amount of data with little data reuse and data movement has become the dominating factor for both power and performance of these applications. It will be critical to match the compute throughput to the data access bandwidth and to locate the compute near data. Much has been and continuously needs to be learned about algorithms, languages, compilers and hardware architecture in this movement. What are the killer applications that may become the new driver for future technology development? How hard is it to program existing systems to address the data movement issues today? How will we program these systems in the future? How will innovations in memory devices present further opportunities and challenges in designing new systems? What is the impact on long-term software engineering cost of applications? In this paper, we present some lessons learned as we design the IBM-Illinois C3SR (Center for Cognitive Computing Systems Research) Erudite system inside this perfect storm.
Hyper-AP: Enhancing Associative Processing Through A Full-Stack Optimization Associative processing (AP) is a promising PIM paradigm that overcomes the von Neumann bottleneck (memory wall) by virtue of a radically different execution model. By decomposing arbitrary computations into a sequence of primitive memory operations (i.e., search and write), AP’s execution model supports concurrent SIMD computations in-situ in the memory array to eliminate the need for data movement. This execution model also provides a native support for flexible data types and only requires a minimal modification on the existing memory design (low hardware complexity). Despite these advantages, the execution model of AP has two limitations that substantially increase the execution time, i.e., 1) it can only search a single pattern in one search operation and 2) it needs to perform a write operation after each search operation. In this paper, we propose the Highly Performant Associative Processor (Hyper-AP) to fully address the aforementioned limitations. The core of Hyper-AP is an enhanced execution model that reduces the number of search and write operations needed for computations, thereby reducing the execution time. This execution model is generic and improves the performance for both CMOS-based and RRAM-based AP, but it is more beneficial for the RRAM-based AP due to the substantially reduced write operations. We then provide complete architecture and micro-architecture with several optimizations to efficiently implement Hyper-AP. In order to reduce the programming complexity, we also develop a compilation framework so that users can write C-like programs with several constraints to run applications on Hyper-AP. Several optimizations have been applied in the compilation process to exploit the unique properties of Hyper-AP. Our experimental results show that, compared with the recent work IMP, Hyper-AP achieves up to 54×/4.4× better power-/area-efficiency for various representative arithmetic operations. For the evaluated benchmarks, Hyper-AP achieves 3.3× speedup and 23.8× energy reduction on average compared with IMP. Our evaluation also confirms that the proposed execution model is more beneficial for the RRAM-based AP than its CMOS-based counterpart.
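The baseline AP execution model the paper starts from (one pattern per search, a write after each search) is easy to sketch for a bitwise operation. Below, a bit-serial AND over all rows is expressed as search/write pairs; the word width and row contents are arbitrary, and Hyper-AP's contribution is precisely to cut down the number of such primitive operations.

```python
# Toy associative-processing step: bitwise AND over all rows, expressed as
# the two primitive operations (content SEARCH, then parallel WRITE).
rows = [(0b1010, 0b0110), (0b1111, 0b0001), (0b0011, 0b1011)]  # (a, b) per row
result = [0] * len(rows)

for bit in range(4):                       # bit-serial over the word width
    mask = 1 << bit
    # SEARCH: rows whose (a_bit, b_bit) match the single pattern (1, 1)...
    matches = [i for i, (a, b) in enumerate(rows) if (a & mask) and (b & mask)]
    # ...then WRITE 1 into the result bit of every matching row in parallel.
    for i in matches:
        result[i] |= mask

print([bin(r) for r in result])            # per-row a & b
```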
3.2 Zen: A next-generation high-performance x86 core Codenamed “Zen”, AMD's next-generation, high-performance x86 core targets server, desktop, and mobile client applications. Utilizing Global Foundries' energy-efficient 14nm LPP FinFET process, the 44 mm² Zen core complex unit (CCX) has 1.4B transistors and contains a shared 8MB L3 cache and four cores (Fig. 3.2.7). The 7 mm² Zen core contains a dedicated 0.5MB L2 cache, 32KB L1 data cache, and 64KB L1 instruction cache. Each core has a digital low drop-out (LDO) voltage regulator and digital frequency synthesizer (DFS) to independently vary frequency and voltage across power states.
ELP²IM: Efficient and Low Power Bitwise Operation Processing in DRAM Recently proposed DRAM based memory-centric architectures have demonstrated their great potentials in addressing the memory wall challenge of modern computing systems. Such architectures exploit charge sharing of multiple rows to enable in-memory bitwise operations. However, existing designs rely heavily on reserved rows to implement computation, which introduces high data movement overhead, large operation latency, large energy consumption, and low operation reliability. In this paper, we propose ELP²IM, an efficient and low power processing in-memory architecture, to address the above issues. ELP²IM utilizes two stable states of sense amplifiers in DRAM subarrays so that it can effectively reduce the number of intra-subarray data movements as well as the number of concurrently opened DRAM rows, which exhibits great performance and energy consumption advantages over existing designs. Our experimental results show that the power efficiency of ELP²IM is more than 2× improvement over the state-of-the-art DRAM based memory-centric designs in real application.
OpenCL: A Parallel Programming Standard for Heterogeneous Computing Systems The OpenCL standard offers a common API for program execution on systems composed of different types of computational devices such as multicore CPUs, GPUs, or other accelerators.
The accelerator store: A shared memory framework for accelerator-based systems This paper presents the many-accelerator architecture, a design approach combining the scalability of homogeneous multi-core architectures and system-on-chip's high performance and power-efficient hardware accelerators. In preparation for systems containing tens or hundreds of accelerators, we characterize a diverse pool of accelerators and find each contains significant amounts of SRAM memory (up to 90% of their area). We take advantage of this discovery and introduce the accelerator store, a scalable architectural component to minimize accelerator area by sharing its memories between accelerators. We evaluate the accelerator store for two applications and find significant system area reductions (30%) in exchange for small overheads (2% performance, 0%–8% energy). The paper also identifies new research directions enabled by the accelerator store and the many-accelerator architecture.
Multilevel k-way hypergraph partitioning In this paper, we present a new multilevel k-way hypergraph partitioning algorithm that substantially outperforms the existing state-of-the-art K-PM/LR algorithm for multi-way partitioning, both for optimizing local as well as global objectives. Experiments on the ISPD98 benchmark suite show that the partitionings produced by our scheme are on the average 15% to 23% better than those produced by the K-PM/LR algorithm, both in terms of the hyperedge cut as well as the (K − 1) metric. Furthermore, our algorithm is significantly faster, requiring 4 to 5 times less time than that required by K-PM/LR.
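The two objectives mentioned above are straightforward to compute for a given partition: the hyperedge cut charges each hyperedge spanning more than one part once, while the (K − 1) metric charges it (parts spanned − 1). The hypergraph and partition below are made-up examples.

```python
# Hyperedge cut and (K-1) metric for a given k-way partition (toy data).
hyperedges = [{0, 1, 2}, {2, 3}, {4, 5, 6}, {0, 6}]
part = {0: 0, 1: 0, 2: 1, 3: 1, 4: 2, 5: 2, 6: 2}   # vertex -> partition

def cut_metrics(hyperedges, part):
    cut = k_minus_1 = 0
    for e in hyperedges:
        spanned = len({part[v] for v in e})  # distinct parts this edge touches
        if spanned > 1:
            cut += 1                          # hyperedge cut: charged once
            k_minus_1 += spanned - 1          # (K-1) metric: charged per part
    return cut, k_minus_1

print(cut_metrics(hyperedges, part))  # (2, 2)
```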
Stability of switched positive linear systems with average dwell time switching. In this paper, the stability analysis problem for a class of switched positive linear systems (SPLSs) with average dwell time switching is investigated. A multiple linear copositive Lyapunov function (MLCLF) is first introduced, by which sufficient stability criteria, in terms of a set of linear matrix inequalities, are given for the underlying systems in both continuous-time and discrete-time contexts. The stability results for SPLSs under arbitrary switching, which have been previously studied in the literature, can be easily obtained by reducing the MLCLF to the common linear copositive Lyapunov function used for systems under arbitrary switching. Finally, a numerical example is given to show the effectiveness and advantages of the proposed techniques.
Mismatch-based timing errors in current steering DACs Current Steering Digital-to-Analog Converters (CS-DAC) are important ingredients in many high-speed data converters. Various types of timing errors such as mismatch based timing errors limit broad-band performance. A framework of timing errors is presented here and it is used to analyze these errors. The extracted relationship between performance, block requirements and architecture (e.g segmentation) gives insight on design tradeoffs in Nyquist DACs and multi-bit current-based ΣΔ Modulators.
Lossy data compression using FDCT for haptic communication In this paper, a DCT-based lossy haptic data compression method for haptic communication systems is proposed to reduce the data size flowing between a master and a slave system. The calculation load for the DCT can be high, and the performance and the stability of the system can deteriorate due to the high calculation load. In order to keep the system hard real-time and its performance high, a fast calculation algorithm for the DCT is adopted, and the calculation load is balanced over several sampling periods. The time delay introduced through the compression/expansion of the haptic data is predictable and constant. The time delay, therefore, can be compensated by a time delay compensator. Furthermore, since the delay in this paper is small enough, stable contact with a hard environment is achieved without any time delay compensator. The validity of the proposed lossy haptic data compression method is shown through simulation and experimental results.
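A minimal sketch of the compress/reconstruct step, assuming SciPy's orthonormal DCT-II/DCT-III pair rather than the paper's specific fast-DCT implementation (block length, keep-count, and function names are illustrative):

import numpy as np
from scipy.fft import dct, idct

def compress_block(samples: np.ndarray, k: int) -> np.ndarray:
    # Keep only the k lowest-frequency DCT coefficients of one sample block.
    return dct(samples, norm="ortho")[:k]

def decompress_block(coeffs: np.ndarray, n: int) -> np.ndarray:
    # Zero-pad the truncated coefficients back to length n and invert.
    full = np.zeros(n)
    full[:coeffs.size] = coeffs
    return idct(full, norm="ortho")

block = np.sin(np.linspace(0, np.pi, 64)) + 0.01 * np.random.randn(64)
rec = decompress_block(compress_block(block, 16), 64)  # 4:1 size reduction
print(np.max(np.abs(block - rec)))  # small reconstruction error for smooth haptic data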
An Energy-Efficient SAR ADC With Event-Triggered Error Correction This brief presents an energy-efficient fully differential 10-bit successive approximation register (SAR) analog-to-digital converter (ADC) with a sampling rate of 320 kS/s. The optimal capacitor split and bypass number is analyzed to achieve the highest switching energy efficiency. The common-mode voltage level remains constant during the MSB-capacitor switching cycles. To minimize nonlinearity due to charge averaging voltage offset or DAC array mismatch, an event-triggered error correction method is employed as a redundant cycle for detecting digital code errors within 1 least significant bit (LSB). A test chip was fabricated using the 180-nm CMOS process and occupied a 0.0564-mm² core area. Under a regular 0.65-V supply voltage, the ADC achieved an effective number of bits of 9.61 bits and a figure of merit (FOM) of 6.38 fJ/conversion-step, with 1.6-μW power dissipation for a low-frequency input. The measured differential and integral nonlinearity results are within 0.30 LSB and 0.43 LSB, respectively.
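A behavioral sketch of the successive-approximation loop itself (the event-triggered error correction, capacitor split, and bypass logic are not modeled; all values are illustrative):

def sar_convert(vin: float, vref: float, nbits: int = 10) -> int:
    # Binary search: test one DAC bit per cycle, MSB first.
    code = 0
    for bit in range(nbits - 1, -1, -1):
        trial = code | (1 << bit)
        # Comparator decision: keep the bit if the DAC level stays at or below vin.
        if trial * vref / (1 << nbits) <= vin:
            code = trial
    return code

print(sar_convert(0.4, 0.65))  # 10-bit code for a 0.4 V input with a 0.65 V reference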
1.033709
0.033333
0.033333
0.033333
0.033333
0.033333
0.02381
0.008896
0.00014
0
0
0
0
0
A Modelling and Nonlinear Equalization Technique for a 20 Gb/s 0.77 pJ/b VCSEL Transmitter in 32 nm SOI CMOS. This paper describes an ultralow-power VCSEL transmitter in 32 nm SOI CMOS. To increase its power efficiency, the VCSEL is driven at a low bias current. Driving the VCSEL in this condition increases its inherent nonlinearity. Conventional pre-emphasis techniques cannot compensate for this effect because they have a linear response. To overcome this limitation, a nonlinear equalization scheme is pr...
A 64-Gb/s 4-PAM Transceiver Utilizing an Adaptive Threshold ADC in 16-nm FinFET. A 64-Gb/s 4-pulse-amplitude modulation (PAM) transceiver fabricated with a 16-nm fin field effect transistor (FinFET) technology is presented with a power consumption that scales with link loss. The transmitter (TX) includes a three-tap feed-forward equalizer (FFE) (one pre and one post) achieving a level separation mismatch ratio (RLM) of 99% and a random jitter (RJ) of 162-fs rms. The maximum swing is 1.1 Vppd at a power consumption of 89.7 mW including clock distribution from a 1.2-V supply, corresponding to 1.39 pJ/bit. The receiver analog front end (RX-AFE) consists of a half-rate (HR) sampling continuous-time linear equalizer (CTLE) and a 6-bit flash (1-bit folding) analog-to-digital converter (ADC) capable of non-uniform quantization. The non-uniform thresholds are selected based on a greedy search approach which allows the RX to reduce power at low channel loss in a highly granular manner and achieves a better bit error rate (BER) than a uniform quantizer. For a channel with −8.6-dB loss at Nyquist, the ADC can be configured in 2-bit mode, achieving BER < 1e−6 at an RX-AFE power consumption of 100 mW. For a −29.5-dB loss channel, the RX-AFE consumes 283.9 mW and achieves a BER < 1e−4 in conjunction with a software digital equalizer. For a −13.5-dB loss channel, a greedy search is used to optimize the quantization threshold levels, achieving an order of magnitude improvement in BER compared to uniform quantization.
A 60-Gb/s PAM4 Wireline Receiver With 2-Tap Direct Decision Feedback Equalization Employing Track-and-Regenerate Slicers in 28-nm CMOS This article describes a 4-level pulse amplitude modulation (PAM4) receiver incorporating continuous time linear equalizers (CTLEs) and a 2-tap direct decision feedback equalizer (DFE) for applications in wireline communication. A CMOS track-and-regenerate slicer is proposed and employed in the PAM4 receiver. The proposed slicer is designed for the purposes of improving the clock-to-Q delay as well as the output signal swing. A direct DFE in a PAM4 receiver is made possible with the proposed slicer by having rail-to-rail digital feedback signals available with reduced delay, and accordingly relaxing the settling time constraint of the summer. With the 2-tap direct DFE enabled by the proposed slicer, loop-unrolling and inductor-based bandwidth enhancement techniques, which can be area/power intensive, are not necessary at high data rates. The PAM4 receiver fabricated in 28-nm CMOS technology achieves bit-error-rate (BER) better than 1E-12, and energy efficiency of 1.1 pJ/b at 60 Gb/s, measured over a channel with 8.2-dB loss at Nyquist.
A 40-to-56 Gb/s PAM-4 Receiver With Ten-Tap Direct Decision-Feedback Equalization in 16-nm FinFET. A 40-56 Gb/s PAM-4 receiver with ten-tap decision-feedback equalization (DFE) targeting chip-to-module and board-to-board cable interconnects is designed in 16-nm FinFET. The design implements direct feedback of the first post-cursor (h1) DFE tap to reduce the number of slicers. The h1 feedback signals are directly tapped from the master latch output of the StrongArm-based slicers. A CMOS amplifie...
A 64 Gb/s Low-Power Transceiver for Short-Reach PAM-4 Electrical Links in 28-nm FDSOI CMOS A four-level pulse-amplitude modulation (PAM-4) transceiver operating up to 64 Gb/s in 28-nm CMOS fully depleted silicon-on-insulator (FDSOI) for short-reach electrical links is presented. The receiver equalization relies on a flexible continuous-time linear equalizer (CTLE), providing a very accurate channel inversion through a transfer function that can be optimally adapted at low frequency, mid-frequency, and high frequency independently. The CTLE meets the performance requirements of CEI-56G-VSR without requiring a decision feedback equalizer (DFE) implementation. As a result, timing constraints for comparators in data and edge sampling paths may be relaxed by using track-and-hold (T&H) stages, saving power consumption. At the maximum speed, the receiver draws 180 mA from a 1-V supply, corresponding to only 2.8 mW/Gb/s. The transmitter embeds a flexible feed-forward equalizer (FFE) which can be reconfigured to comply with legacy standards. A comparison between current-mode (CM) and voltage-mode (VM) TX drivers is proposed, proving through experiments that the latter yields larger PAM-4 eye openings, thanks to its intrinsically higher speed. The full transceiver (TX, RX, and clock generation) operates from 16 to 64 Gb/s in PAM-4 and 8 to 32 Gb/s in non-return-to-zero (NRZ), and supports 2× and 4× oversampling to reduce the data rate down to 2 Gb/s. A TX-to-RX link at 64 Gb/s, across a 16.8-dB-loss channel, reaches a 10⁻¹² minimum bit-error rate (BER) and a 0.19-UI horizontal eye opening at BER = 10⁻⁶, with 5.02 mW/Gb/s power dissipation.
Bandwidth extension in CMOS with optimized on-chip inductors We present a technique for enhancing the bandwidth of gigahertz broad-band circuitry by using optimized on-chip spiral inductors as shunt-peaking elements. The series resistance of the on-chip inductor is incorporated as part of the load resistance to permit a large inductance to be realized with minimum area and capacitance. Simple, accurate inductance expressions are used in a lumped circuit inductor model to allow the passive and active components in the circuit to be simultaneously optimized. A quick and efficient global optimization method, based on geometric programming, is discussed. The bandwidth extension technique is applied in the implementation of a 2.125-Gbaud preamplifier that employs a common-gate input stage followed by a cascoded common-source stage. On-chip shunt peaking is introduced at the dominant pole to improve the overall system performance, including a 40% increase in the transimpedance. This implementation achieves a 1.6-kΩ transimpedance and a 0.6-μA input-referred current noise, while operating with a photodiode capacitance of 0.6 pF. A fully differential topology ensures good substrate and supply noise immunity. The amplifier, implemented in a triple-metal, single-poly, 14-GHz fT, 0.5-μm CMOS process, dissipates 225 mW, of which 110 mW is consumed by the 50-Ω output driver stage. The optimized on-chip inductors consume only 15% of the total area of 0.6 mm². This paper discusses how optimized on-chip inductors can be used to enhance the bandwidth of broad-band amplifiers in the 1-2-GHz range and thereby push the performance limits of CMOS implementations. An attractive feature of this technique is that the bandwidth enhancement comes with no additional power dissipation. This bandwidth enhancement is achieved by shunt peaking, a method first used in the 1940's to extend the bandwidth of television tubes. Section II describes the fundamentals of this approach. Section III focuses on how shunt-peaked amplifiers can be implemented in the integrated circuit environment. A well-accepted lumped circuit model for a spiral inductor is used along with recently developed inductance expressions to allow the inductor modeling to be performed in a standard circuit design environment such as SPICE. This approach circumvents the inconvenient, iterative interface between an inductor simulator and a circuit design tool. Most important, a new design methodology is described that yields a large inductance in a small die area. The new method is implemented using a simple and efficient circuit design computer-aided design tool described in Section IV. This tool is based on geometric programming (GP), a special type of optimization problem for which very efficient global optimization methods have been developed. An attractive feature of this technique is that it enables the designer to optimize passive and active devices simultaneously. This feature allows a shunt-peaked amplifier with on-chip inductors to be optimized directly from specifications. Sections V and VI illustrate how shunt peaking is used to improve the performance of a transimpedance preamplifier. A prototype preamplifier, intended for gigabit optical communication systems, is implemented in a 0.5-μm CMOS process. The use of on-chip shunt peaking permits a 40% increase in the transimpedance with no additional power dissipation. The optimized on-chip inductors only consume 15% of the total chip area.
Cache operations by MRU change The performance of set associative caches is analyzed. The method used is to group the cache lines into regions according to their positions in the replacement stacks of a cache, and then to observe how the memory access of a CPU is distributed over these regions. Results from the preserved CPU traces show that the memory accesses are heavily concentrated on the most recently used (MRU) region in the cache. The concept of MRU change is introduced; the idea is to use the event that the CPU accesses a non-MRU line to approximate the time the CPU is changing its working set. The concept is shown to be useful in many aspects of cache design and performance evaluation, such as comparison of various replacement algorithms, improvement of prefetch algorithms, and speedup of cache simulation.
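A small sketch of the MRU-change idea (trace format, line size, and set mapping are assumptions; the paper's replacement-stack regions reduce here to per-set LRU stacks):

from collections import defaultdict

def mru_changes(trace, num_sets=64, ways=4):
    stacks = defaultdict(list)          # set index -> cache lines, MRU first
    changes = 0
    for addr in trace:
        line = addr // 64               # 64-byte lines assumed
        s = stacks[line % num_sets]
        if not s or s[0] != line:       # access to a non-MRU line:
            changes += 1                #   the CPU may be changing its working set
        if line in s:
            s.remove(line)
        s.insert(0, line)               # promote to MRU
        del s[ways:]                    # evict beyond the associativity
    return changes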
Broadband MIMO-OFDM Wireless Communications Orthogonal frequency division multiplexing (OFDM) is a popular method for high data rate wireless transmission. OFDM may be combined with antenna arrays at the transmitter and receiver to increase the diversity gain and/or to enhance the system capacity on time-varying and frequency-selective channels, resulting in a multiple-input multiple-output (MIMO) configuration. The paper explores various p...
Supporting Aggregate Queries Over Ad-Hoc Wireless Sensor Networks We show how the database community's notion of a generic query interface for data aggregation can be applied to ad-hoc networks of sensor devices. As has been noted in the sensor network literature, aggregation is important as a data reduction tool; networking approaches, however, have focused on application specific solutions, whereas our in-network aggregation approach is driven by a general purpose, SQL-style interface that can execute queries over any type of sensor data while providing opportunities for significant optimization. We present a variety of techniques to improve the reliability and performance of our solution. We also show how grouped aggregates can be efficiently computed and offer a comparison to related systems and database projects.
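A toy sketch of the in-network aggregation idea for an AVG query (the init/merge/finalize decomposition is generic; these function names are not this system's actual API): interior nodes merge their children's partial state records and forward a single record upstream instead of raw readings.

def avg_init(reading):                 # leaf: one sensor reading
    return (reading, 1)

def avg_merge(a, b):                   # interior node: combine two partial records
    return (a[0] + b[0], a[1] + b[1])

def avg_final(state):                  # root: evaluate the aggregate
    s, c = state
    return s / c

records = [avg_init(r) for r in [20.5, 21.0, 19.5, 22.0]]
state = records[0]
for r in records[1:]:
    state = avg_merge(state, r)
print(avg_final(state))                # 20.75, shipped upstream as (sum, count) pairs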
Exploiting ILP, TLP, and DLP with the polymorphous TRIPS architecture This paper describes the polymorphous TRIPS architecture which can be configured for different granularities and types of parallelism. TRIPS contains mechanisms that enable the processing cores and the on-chip memory system to be configured and combined in different modes for instruction, data, or thread-level parallelism. To adapt to small and large-grain concurrency, the TRIPS architecture contains four out-of-order, 16-wide-issue Grid Processor cores, which can be partitioned when easily extractable fine-grained parallelism exists. This approach to polymorphism provides better performance across a wide range of application types than an approach in which many small processors are aggregated to run workloads with irregular parallelism. Our results show that high performance can be obtained in each of the three modes--ILP, TLP, and DLP-demonstrating the viability of the polymorphous coarse-grained approach for future microprocessors.
RockSalt: better, faster, stronger SFI for the x86 Software-based fault isolation (SFI), as used in Google's Native Client (NaCl), relies upon a conceptually simple machine-code analysis to enforce a security policy. But for complicated architectures such as the x86, it is all too easy to get the details of the analysis wrong. We have built a new checker that is smaller, faster, and has a much reduced trusted computing base when compared to Google's original analysis. The key to our approach is automatically generating the bulk of the analysis from a declarative description which we relate to a formal model of a subset of the x86 instruction set architecture. The x86 model, developed in Coq, is of independent interest and should be usable for a wide range of machine-level verification tasks.
Sensor network gossiping or how to break the broadcast lower bound Gossiping is an important problem in Radio Networks that has been well studied, leading to many important results. Due to strong resource limitations of sensor nodes, previous solutions are frequently not feasible in Sensor Networks. In this paper, we study the gossiping problem in the restrictive context of Sensor Networks. By exploiting the geometry of sensor node distributions, we present an algorithm with optimal running time O(D + Δ) that completes gossiping with high probability in a Sensor Network of unknown topology and adversarial wake-up, where D is the diameter and Δ the maximum degree of the network. Given that an algorithm for gossiping also solves the broadcast problem, our result proves that the classic lower bound of [16] can be broken if nodes are allowed to do preprocessing.
20.3 A feedforward controlled on-chip switched-capacitor voltage regulator delivering 10W in 32nm SOI CMOS On-chip (or fully integrated) switched-capacitor (SC) voltage regulators (SCVR) have recently received a lot of attention due to their ease of monolithic integration. The use of deep trench capacitors can lead to SCVR implementations that simultaneously achieve high efficiency, high power density, and fast response time. For the application of granular power distribution of many-core microprocessor systems, the on-chip SCVR must maintain an output voltage above a certain minimum level VOUT,min in order for the microprocessor core to meet setup-time requirements. Following a transient load change, the output voltage typically exhibits a droop due to parasitic inductances and resistances in the power distribution network. Therefore, the steady-state output voltage is kept high enough to ensure VOUT > VOUT,min at all times, thereby introducing an output voltage overhead that leads to increased system power consumption. The output voltage droop can be reduced by implementing fast regulation and a sufficient amount of on-chip decoupling capacitance. However, a large amount of on-chip decoupling capacitance is needed to significantly reduce the droop, and it becomes impractical to implement owing to the large chip area overhead required. This paper presents a feedforward control scheme that significantly reduces the output voltage droop in the presence of a large input voltage droop following a transient event. This in turn reduces the required output voltage overhead and may lead to significant overall system power savings.
A 12.6 mW, 573-2901 kS/s Reconfigurable Processor for Reconstruction of Compressively Sensed Physiological Signals. This article presents a reconfigurable processor based on the alternating direction method of multipliers (ADMM) algorithm for reconstructing compressively sensed physiological signals. The architecture is flexible to support physiological ExG [electrocardiography (ECG), electromyography (EMG), and electroencephalography (EEG)] signal with various signal dimensions (128, 256, 384, and 512). Data c...
1.103333
0.1
0.1
0.05
0.033333
0.001151
0
0
0
0
0
0
0
0
A Lidar Receiver for Speed Measurement Based on Software Radio. To cope with the large bandwidth and low SNR of the echo signal, and applying the software radio concept, a wide-band receiver scheme for continuous-wave lidar that combines software dual conversion with an IF digital quadrature demodulation structure is presented and realized. The effect of sampling-rate deviation on velocity accuracy is analyzed, and a compensation measure is proposed. Simulation experiments indicate that demodulating the echo signal to a unified IF signal by software dual conversion solves the difficult problem of directly sampling a wide-band signal with a limited sampling rate, and offers excellent frequency selectivity. Compared with the traditional approach and analog quadrature detection, the performance is significantly improved. By using frequency-offset correction, the Doppler frequency error caused by sampling-rate deviation can be eliminated.
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use {\em dominance frontiers}, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
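A compact sketch of dominance-frontier computation in its commonly cited per-join-node form (the Cooper-Harvey-Kennedy restatement of the idea, not this paper's exact algorithm): a node Y belongs to DF(X) when X dominates a predecessor of Y but does not strictly dominate Y.

def dominance_frontiers(preds, idom):
    # preds: node -> list of CFG predecessors; idom: node -> immediate dominator
    df = {n: set() for n in idom}
    for n, ps in preds.items():
        if len(ps) < 2:
            continue                    # only join nodes contribute
        for p in ps:
            runner = p
            while runner != idom[n]:    # walk up the dominator tree
                df[runner].add(n)
                runner = idom[runner]
    return df

# Diamond CFG: entry -> a, b; a, b -> merge.
preds = {"entry": [], "a": ["entry"], "b": ["entry"], "merge": ["a", "b"]}
idom  = {"entry": "entry", "a": "entry", "b": "entry", "merge": "entry"}
print(dominance_frontiers(preds, idom))  # merge lands in DF(a) and DF(b)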
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ⋄W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ⋄W. Thus, ⋄W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
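A minimal sketch of Chord's core mapping (finger tables, joins, and stabilization omitted; ring size and naming are assumptions): keys and nodes hash onto the same identifier circle, and a key is stored at its successor, the first node clockwise from the key's point.

import hashlib
from bisect import bisect_left

M = 16                                    # identifier ring of size 2^16 (assumed)

def ring_hash(name: str) -> int:
    return int(hashlib.sha1(name.encode()).hexdigest(), 16) % (1 << M)

def successor(node_ids, key: str) -> int:
    # Node responsible for `key` among the sorted node identifiers.
    k = ring_hash(key)
    i = bisect_left(node_ids, k)
    return node_ids[i % len(node_ids)]    # wrap around the ring

nodes = sorted(ring_hash(f"node{i}") for i in range(8))
print(successor(nodes, "my-data-item"))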
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area² product (EDA²P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA²P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
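As a worked instance of the method for one of the surveyed problems, here is an ADMM lasso solver in NumPy (a sketch following the standard x-, z-, and scaled-dual updates; the penalty parameter and iteration count are arbitrary choices):

import numpy as np

def soft(v, k):                           # proximal operator of k * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

def admm_lasso(A, b, lam, rho=1.0, iters=200):
    # minimize (1/2)||Ax - b||^2 + lam*||z||_1  subject to  x = z
    n = A.shape[1]
    x = z = u = np.zeros(n)
    AtA, Atb = A.T @ A, A.T @ b
    L = np.linalg.cholesky(AtA + rho * np.eye(n))      # factor once, reuse every iteration
    for _ in range(iters):
        rhs = Atb + rho * (z - u)
        x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))  # x-update (ridge solve)
        z = soft(x + u, lam / rho)                         # z-update (shrinkage)
        u = u + x - z                                      # scaled dual update
    return z

A = np.random.randn(50, 20)
b = A @ (np.random.randn(20) * (np.random.rand(20) > 0.7)) + 0.01 * np.random.randn(50)
print(admm_lasso(A, b, lam=0.1))          # sparse coefficient estimate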
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)-size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε²}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D+√n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(log N), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator, while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via error amplifier) with a moderate-gain high-frequency path (via bandpass filter) at the inputs of the PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate the two zeros required in a Type III compensator. A constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process, with the constant Gm/C biasing technique being applied to one of the designs. Measurement results show that the converter output settles within 7 μs for a load current step of 500 mA. A peak efficiency of 97% is obtained at 360 mW output power, and a high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of the proposed compensator are reduced by more than 75% in both designs, compared to an equivalent conventional Type III compensator.
Conductance modulation techniques in switched-capacitor DC-DC converter for maximum-efficiency tracking and ripple mitigation in 22nm Tri-gate CMOS Active conduction modulation techniques are demonstrated in a fully integrated multi-ratio switched-capacitor voltage regulator with hysteretic control, implemented in 22nm tri-gate CMOS with high-density MIM capacitors. We present (i) an adaptive switching frequency and switch-size scaling scheme for maximum efficiency tracking across a wide range of voltages and currents, governed by a frequency-based control law that is experimentally validated across multiple dies and temperatures, and (ii) a simple active ripple mitigation technique that modulates the gate drive of select MOSFET switches effectively in all conversion modes. Efficiency boosts of up to 15% are measured under light-load conditions. A load-independent output ripple of <50 mV is achieved, enabling fewer interleaving phases. Testchip implementations and measurements demonstrate ease of integration in SoC designs, power efficiency benefits, and EMI/RFI improvements.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54 ×(CPU), 1.56 ×(GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02 ×(memristor-based PIM) and 10.48 ×(CMOS-based PIM), on average.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Spurious tones in digital delta-sigma modulators resulting from pseudorandom dither Digital delta-sigma modulators (DDSMs) are finite state machines; their spectra are characterized by strong periodic tones (so-called spurs) when they cycle repeatedly in time through a small number of states. This happens when the input is constant or periodic. Pseudorandom dither generators are widely used to break up periodic cycles in DDSMs in order to eliminate spurs produced by underlying periodic behavior. Unfortunately, pseudorandom dither signals are themselves periodic and therefore can have limited effectiveness. This paper addresses the fundamental limitations of using pseudorandom dither signals that are inherently periodic. We clarify some common misunderstandings in the DDSM literature. We present rigorous mathematical analysis, case studies to illustrate the issues, and insights which can prove useful in design.
Prediction of the Spectrum of a Digital Delta–Sigma Modulator Followed by a Polynomial Nonlinearity This paper presents a mathematical analysis of the power spectral density of the output of a nonlinear block driven by a digital delta-sigma modulator. The nonlinearity is a memoryless third-order polynomial with real coefficients. The analysis yields expressions that predict the noise floor caused by the nonlinearity when the input is constant.
Masked Dithering of MASH Digital Delta-Sigma Modulators with Constant Inputs Using Linear Feedback Shift Registers. Digital delta-sigma modulators (DDSMs) are finite state machines; their spectra are characterized by strong periodic tones (so-called spurs) when they cycle repeatedly in time through a small number of states. This is particularly likely to happen when the input is constant or periodic. Dither generators based on linear feedback shift registers (LFSRs) are widely used to break up periodic cycles i...
Digital PLLs: the modern timing reference for radar and communication systems Digital PLLs are nowadays recognized as a viable approach for the design of high-performance frequency synthesizers in scaled CMOS technologies. Latest implementations allow achieving at low power both state-of-the-art rms jitter, between 50fs and 100fs, and highly linear fast frequency modulation capability, thus enabling both high-efficiency communications systems and radar applications in CMOS....
On the Mechanisms Governing Spurious Tone Injection in Fractional PLLs. In fractional phase-locked loops driven by ΣΔ modulators, there can be spurious tones in the power spectral density (PSD) of the output signals even if the PSDs of the sequences used to drive the frequency divider are spur-free. This is due to undesirable nonlinear effects notably occurring in the charge pump (CP). In this brief, we focus on static and dynamic mismatch of the CP and its interaction with...
Efficient dithering in MASH sigma-delta modulators for fractional frequency synthesizers The digital multistage-noise-shaping (MASH) ΣΔ modulators used in fractional frequency synthesizers are prone to spur tone generation in their output spectrum. In this paper, the state of the art on spur-tone-magnitude reduction is used to demonstrate that an M-bit MASH architecture dithered by a simple M-bit linear feedback shift register (LFSR) can be as effective as more sophisticated topologies if the dither signal is properly added. A comparison between the existent digital ΣΔ modulators used in fractional synthesizers is presented to demonstrate that the MASH architecture has the best tradeoff between complexity and quantization noise shaping, but they present spur tones. The objective of this paper was to significantly decrease the area of the circuit used to reduce the spur tone magnitude for these MASH topologies. The analysis is validated with a theoretical study of the paths where the dither signal can be added. Experimental results of a digital M-bit MASH 1-1-1 ΣΔ modulator with the proposed way to add the LFSR dither are presented to make a hardware comparison.
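A behavioral sketch of the architecture under discussion (accumulator width, LFSR polynomial, and the chosen dither injection point are assumptions, not the paper's measured configuration): three cascaded M-bit accumulators with the usual error-cancellation network, and a 1-bit LFSR dither added at the input LSB.

def lfsr_bit(state, taps=(15, 13, 12, 10)):     # 16-bit Fibonacci LFSR (assumed polynomial)
    fb = 0
    for t in taps:
        fb ^= (state >> t) & 1
    return ((state << 1) | fb) & 0xFFFF, fb

def mash111(x, n, mbits=16, seed=0xACE1):
    mod, s1 = 1 << mbits, seed
    a1 = a2 = a3 = 0
    c2_prev = c3_prev = c3_prev2 = 0
    out = []
    for _ in range(n):
        s1, d = lfsr_bit(s1)                    # dither bit added at the input LSB
        a1 += x + d; c1, a1 = a1 // mod, a1 % mod
        a2 += a1;    c2, a2 = a2 // mod, a2 % mod
        a3 += a2;    c3, a3 = a3 // mod, a3 % mod
        # error-cancellation network: y = c1 + (1 - z^-1) c2 + (1 - z^-1)^2 c3
        y = c1 + (c2 - c2_prev) + (c3 - 2 * c3_prev + c3_prev2)
        c2_prev, c3_prev2, c3_prev = c2, c3_prev, c3
        out.append(y)
    return out

seq = mash111(x=41, n=4096)
print(sum(seq) / len(seq) * (1 << 16))  # long-run mean ~ 41.5 LSB: the input plus the 0.5 LSB dither mean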
Enhanced phase noise modeling of fractional-N frequency synthesizers. Mathematical models for the behavior of fractional-N phase-locked-loop frequency synthesizers (Frac-N) are presented. The models are intended for calculating rms phase error and determining spurs in the output of Frac-N. The models describe noise contributions due to the charge pump (CP), the phase frequency detector (PFD), the loop filter, the voltage control oscillator, and the delta-sigma modul...
Why systolic architectures?
Gossip-Based Computation of Aggregate Information Over the last decade, we have seen a revolution in connectivity between computers, and a resulting paradigm shift from centralized to highly distributed systems. With massive scale also comes massive instability, as node and link failures become the norm rather than the exception. For such highly volatile systems, decentralized gossip-based protocols are emerging as an approach to maintaining simplicity and scalability while achieving fault-tolerant information dissemination.In this paper, we study the problem of computing aggregates with gossip-style protocols. Our first contribution is an analysis of simple gossip-based protocols for the computations of sums, averages, random samples, quantiles, and other aggregate functions, and we show that our protocols converge exponentially fast to the true answer when using uniform gossip.Our second contribution is the definition of a precise notion of the speed with which a node's data diffuses through the network. We show that this diffusion speed is at the heart of the approximation guarantees for all of the above problems. We analyze the diffusion speed of uniform gossip in the presence of node and link failures, as well as for flooding-based mechanisms. The latter expose interesting connections to random walks on graphs.
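A compact sketch of the push-sum protocol the paper analyzes (synchronous rounds and a complete communication graph are simplifying assumptions): each node halves its (sum, weight) pair, keeps one half, sends the other to a uniformly random node, and every sum/weight ratio converges to the global average.

import random

def push_sum(values, rounds=50):
    n = len(values)
    s = list(values)                           # running sums
    w = [1.0] * n                              # running weights
    for _ in range(rounds):
        inbox = [(0.0, 0.0)] * n
        for i in range(n):
            j = random.randrange(n)            # uniform gossip target
            half_s, half_w = s[i] / 2, w[i] / 2
            s[i], w[i] = half_s, half_w        # keep one half locally
            inbox[j] = (inbox[j][0] + half_s, inbox[j][1] + half_w)
        for i in range(n):                     # deliver this round's messages
            s[i] += inbox[i][0]
            w[i] += inbox[i][1]
    return [s[i] / w[i] for i in range(n)]     # per-node estimates of the average

print(push_sum([10.0, 20.0, 30.0, 40.0]))      # all entries near 25.0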
Filtering by Aliasing In this manuscript we describe a fundamentally novel approach to the design of anti-aliasing filters. The approach, termed Filtering by Aliasing, incorporates the frequency-domain aliasing operation itself into the filtering task. The spectral content is spread with a periodic mixer and weighted with a simple analog filter before it aliases at the sampler. By designing the system according to the formulations presented in this manuscript, the sampled output will have been subjected to sharp, highly programmable anti-alias filtering. This manuscript describes the proposed Filtering by Aliasing idea, the effective programmable anti-aliasing filter, its design, and its range of frequency responses. The manuscript also addresses the implementation sensitivities of the proposed Filtering by Aliasing approach and provides a performance comparison against existing techniques in the context of reconfigurable anti-alias filtering.
A highly efficient domain-programmable parallel architecture for iterative LDPCC decoding We present a domain-programmable (code-independent) parallel architecture for efficiently implementing iterative probabilistic decoding of LDPC codes. The architecture is based on distributed computing and message passing. The exploited parallelism was found to be communication limited. To increase the utilization of the computational resources, we separate the routing process and state management functionalities performed by physical nodes from computation functionalities performed by function units that can be shared by multiple physical nodes. Simulation results show that the proposed architecture leads to improvements in FU utilization by 251%, 116%, and 209% compared to a hypothetical fully parallel custom implementation, a fully sequential implementation, and a proprietary FPGA custom implementation, respectively, that all use the same core FU design. Compared to an implementation on a shared-memory general-purpose parallel machine, the proposed architecture exhibits 75.6% improvement in scalability. We also introduce a novel low cost store-and-forward routing algorithm for deadlock avoidance in torus networks
An Identity Authentication Mechanism Based on Timing Covert Channel In identity authentication, many advanced encryption techniques are applied to confirm and protect the user identity. Although the identity information is transmitted as ciphertext over the Internet, attackers can steal and forge identities by eavesdropping, cryptanalysis, and forgery. In this paper, a new identity authentication mechanism is proposed, which exploits the Timing Covert Channel (TCC) to transmit the identity information. TCC was originally a hacking technique to leak information while under supervision, using the sending times of packets to encode the information. In our method, the intervals between packets are used to encode the authentication tags. It is difficult for attackers to eavesdrop on, crack, or forge the TCC identity, since the volume of packets is too large to analyze and the noise differs between users and attackers. A platform is designed to verify the proposed method. Experiments show that the intervals and the thresholds are the key factors for accuracy and efficiency, and also prove that our method is a secure way to transmit identity information that could be implemented in various network applications.
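A toy sketch of the encoding idea only, with no real networking (all timing constants are invented for illustration): tag bits map onto inter-packet intervals, and the receiver thresholds each observed interval to recover them.

import random

T0, T1, THRESHOLD = 0.010, 0.030, 0.020   # seconds: '0' gap, '1' gap, decision point (assumed)

def encode(tag_bits):
    # Inter-packet delays a sender would insert between consecutive packets.
    return [T1 if b else T0 for b in tag_bits]

def decode(intervals):
    # Recover tag bits from (possibly jittered) observed intervals.
    return [1 if dt > THRESHOLD else 0 for dt in intervals]

sent = [1, 0, 1, 1, 0, 0, 1, 0]
observed = [dt + random.gauss(0, 0.002) for dt in encode(sent)]  # channel jitter
assert decode(observed) == sent  # holds with high probability at this jitter level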
Current-mode adaptively hysteretic control for buck converters with fast transient response and improved output regulation This paper presents a current-mode adaptively hysteretic control (CMAHC) technique to achieve fast transient response for DC-DC buck converters. A complementary full-range current sensor comprising charging-path and discharging-path sensing transistors is proposed to track the inductor current seamlessly. With the proposed current-mode adaptively hysteretic topology, the inductor current is continuously monitored, and the adaptively hysteretic threshold is dynamically adjusted according to feedback from the output voltage level. Therefore, a fast load-transient response can be achieved. Besides, the output regulation performance is also improved by the proposed dynamic current-scaling circuitry (DCSC). Moreover, the proposed CMAHC topology can be used in a nearly zero-RESR design configuration. The prototype fabricated using a TSMC 0.25-μm CMOS process occupies an area of 1.78 mm² including all bonding pads. Experimental results show that the output voltage ripple is smaller than 30 mV over a wide load current range from 0 mA to 500 mA, with maximum power conversion efficiency higher than 90%. The recovery time from light to heavy load (100 to 500 mA) is smaller than 5 μs.
A Bidirectional Neural Interface IC With Chopper Stabilized BioADC Array and Charge Balanced Stimulator. We present a bidirectional neural interface with a 4-channel biopotential analog-to-digital converter (bioADC) and a 4-channel current-mode stimulator in 180 nm CMOS. The bioADC directly transduces microvolt biopotentials into a digital representation without a voltage-amplification stage. Each bioADC channel comprises a continuous-time first-order ΔΣ modulator with a chopper-stabilized OTA input ...
1.11
0.1
0.1
0.1
0.05
0.036667
0.0025
0
0
0
0
0
0
0
A Comprehensive Method for Reachability Analysis of Uncertain Nonlinear Hybrid Systems. Reachability analysis of nonlinear uncertain hybrid systems, i.e., continuous-discrete dynamical systems whose continuous dynamics, guard sets and reset functions are defined by nonlinear functions, can be decomposed into three algorithmic steps: computing the reachable set when the system is in a given operation mode, computing the discrete transitions, i.e., detecting and localizing when (and where) the continuous flowpipe intersects the guard sets, and aggregating the multiple trajectories that result from an uncertain transition once the whole flowpipe has transitioned so that the algorithm can resume. This paper proposes a comprehensive method that provides a nicely integrated solution to the hybrid reachability problem. At the core of the method is the concept of the MSPB, i.e., a geometrical object obtained as the Minkowski sum of a parallelotope and an axis-aligned box. MSPBs are a way to control the over-approximation of Taylor's interval integration method. As they happen to be a specific type of zonotope, they articulate perfectly with the zonotope bounding method that we propose to enclose, in an optimal way, the set of flowpipe trajectories generated by the transition process. The method is evaluated both theoretically, by analyzing its complexity, and empirically, by applying it to well-chosen hybrid nonlinear examples.
On global identifiability for arbitrary model parametrizations It is a fundamental problem of identification to be able—even before the data have been analyzed—to decide if all the free parameters of a model structure can be uniquely recovered from data. This is the issue of global identifiability. In this contribution we show how global identifiability for an arbitrary model structure (basically with analytic non-linearities) can be analyzed using concepts and algorithms from differential algebra. It is shown how the question of global structural identifiability is reduced to the question of whether the given model structure can be rearranged as a linear regression. An explicit algorithm to test this is also given. Furthermore, the question of 'persistent excitation' for the input can also be tested explicitly in a similar fashion. The algorithms involved are very well suited for implementation in computer algebra. One such implementation is also described.
On location observability notions for switching systems. The focus of this paper is on the analysis of initial discrete state distinguishability notions for switching systems, in a discrete time setting. Moreover, the relationship between initial discrete state distinguishability and the problem of reconstructing the current discrete state is addressed.
An effective method to interval observer design for time-varying systems. An interval observer for Linear Time-Varying (LTV) systems is proposed in this paper. Usually, the design of such observers is based on monotone systems theory. Monotone properties are hard to satisfy in many situations. To overcome this issue, in a recent work, it has been shown that under some restrictive conditions, the cooperativity of an LTV system can be ensured by a static linear transformation of coordinates. However, a constructive method for the construction of the transformation matrix and the observer gain, making the observation error dynamics positive and stable, is still missing and remains an open problem. In this paper, a constructive approach to obtain a time-varying change of coordinates, ensuring the cooperativity of the observer error in the new coordinates, is provided. The efficiency of the proposed approach is shown through computer simulations.
Set-theoretic estimation of hybrid system configurations. Hybrid systems serve as a powerful modeling paradigm for representing complex continuous controlled systems that exhibit discrete switches in their dynamics. The system and the models of the system are nondeterministic due to operation in uncertain environment. Bayesian belief update approaches to stochastic hybrid system state estimation face a blow up in the number of state estimates. Therefore, most popular techniques try to maintain an approximation of the true belief state by either sampling or maintaining a limited number of trajectories. These limitations can be avoided by using bounded intervals to represent the state uncertainty. This alternative leads to splitting the continuous state space into a finite set of possibly overlapping geometrical regions that together with the system modes form configurations of the hybrid system. As a consequence, the true system state can be captured by a finite number of hybrid configurations. A set of dedicated algorithms that can efficiently compute these configurations is detailed. Results are presented on two systems of the hybrid system literature.
Conflict resolution for air traffic management: a study in multiagent hybrid systems Air traffic management (ATM) of the future allows for the possibility of free flight, in which aircraft choose their own optimal routes, altitudes, and velocities. The safe resolution of trajectory conflicts between aircraft is necessary to the success of such a distributed control system. In this paper, we present a method to synthesize provably safe conflict resolution manoeuvres. The method models the aircraft and the manoeuvre as a hybrid control system and calculates the maximal set of safe initial conditions for each aircraft so that separation is assured in the presence of uncertainties in the actions of the other aircraft. Examples of manoeuvres using both speed and heading changes are worked out in detail
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
Chains of recurrences—a method to expedite the evaluation of closed-form functions Chains of Recurrences (CR's) are introduced as an effective method to evaluate functions at regular intervals. Algebraic properties of CR's are examined and an algorithm that constructs a CR for a given function is explained. Finally, an implementation of the method in MAXIMA/Common Lisp is discussed.
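A small sketch of the CR idea for the polynomial case (the paper's algebra covers a broader class of closed-form functions; names here are illustrative): a one-time finite-difference setup turns evaluation over a regular grid x0, x0+h, ... into a few additions per point.

def cr_construct(f, x0, h, k):
    # Finite-difference table of f at x0, x0+h, ..., x0+k*h gives the CR coefficients.
    vals = [f(x0 + i * h) for i in range(k + 1)]
    cr = []
    for _ in range(k + 1):
        cr.append(vals[0])
        vals = [b - a for a, b in zip(vals, vals[1:])]
    return cr

def cr_evaluate(cr, n):
    # Yield f(x0), f(x0+h), ... using only k additions per grid point.
    c = list(cr)
    for _ in range(n):
        yield c[0]
        for i in range(len(c) - 1):
            c[i] += c[i + 1]

f = lambda x: 2 * x**2 - 3 * x + 1
print(list(cr_evaluate(cr_construct(f, 0.0, 0.5, 2), 5)))
# [1.0, 0.0, 0.0, 1.0, 3.0] == [f(0), f(0.5), f(1.0), f(1.5), f(2.0)]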
The PARSEC benchmark suite: characterization and architectural implications This paper presents and characterizes the Princeton Application Repository for Shared-Memory Computers (PARSEC), a benchmark suite for studies of Chip-Multiprocessors (CMPs). Previous available benchmarks for multiprocessors have focused on high-performance computing applications and used a limited number of synchronization methods. PARSEC includes emerging applications in recognition, mining and synthesis (RMS) as well as systems applications which mimic large-scale multithreaded commercial programs. Our characterization shows that the benchmark suite covers a wide spectrum of working sets, locality, data sharing, synchronization and off-chip traffic. The benchmark suite has been made available to the public.
TaintDroid: An Information-Flow Tracking System for Realtime Privacy Monitoring on Smartphones Today's smartphone operating systems frequently fail to provide users with visibility into how third-party applications collect and share their private data. We address these shortcomings with TaintDroid, an efficient, system-wide dynamic taint tracking and analysis system capable of simultaneously tracking multiple sources of sensitive data. TaintDroid enables realtime analysis by leveraging Android's virtualized execution environment. TaintDroid incurs only 32% performance overhead on a CPU-bound microbenchmark and imposes negligible overhead on interactive third-party applications. Using TaintDroid to monitor the behavior of 30 popular third-party Android applications, in our 2010 study we found 20 applications potentially misused users' private information; so did a similar fraction of the tested applications in our 2012 study. Monitoring the flow of privacy-sensitive data with TaintDroid provides valuable input for smartphone users and security service firms seeking to identify misbehaving applications.
Fuzzy regulators and fuzzy observers: relaxed stability conditions and LMI-based designs This paper presents new relaxed stability conditions and LMI (linear matrix inequality) based designs for both continuous and discrete fuzzy control systems. They are applied to design problems of fuzzy regulators and fuzzy observers. First, Takagi and Sugeno's fuzzy models and some stability results are recalled. To design fuzzy regulators and fuzzy observers, nonlinear systems are represented by Takagi-Sugeno (TS) fuzzy models. The concept of parallel distributed compensation is employed to design fuzzy regulators and fuzzy observers from the TS fuzzy models. New stability conditions are obtained by relaxing the stability conditions derived in previous papers. LMI-based design procedures for fuzzy regulators and fuzzy observers are constructed using the parallel distributed compensation and the relaxed stability conditions. Other LMIs with respect to decay rate and constraints on control input and output are also derived and utilized in the design procedures. Design examples for nonlinear systems demonstrate the utility of the relaxed stability conditions and the LMI-based design procedures.
CORDIC-based computation of ArcCos and ArcSin CORDIC-based algorithms to compute cos^{-1}(t), sin^{-1}(t) and sqrt{1-t^{2}} are proposed. The implementation requires a standard CORDIC module plus a module to compute the direction of rotation, this being the same hardware required for the extended CORDIC vectoring, recently proposed by the authors. Although these functions can be obtained as a special case of this extended vectoring, the specific algorithm we propose here presents two significant improvements: (1) it achieves an angle granularity of 2^{-n} using the same datapath width as the standard CORDIC algorithm (about n bits, instead of about 2n which would be required using the extended vectoring), and (2) no repetitions of iterations are needed. The proposed algorithm is compatible with the extended vectoring and, in contrast with previous implementations, the number of iterations and the delay of each iteration are the same as for the conventional CORDIC algorithm.
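As a behavioural reference for what such a unit computes (a plain vectoring baseline, not the paper's improved extended-vectoring datapath, and it uses an explicit square root the hardware avoids), the following Python sketch recovers arccos(t) as the angle of the vector (t, sqrt(1 - t^2)):

```python
import math

# Baseline CORDIC vectoring: rotate (x, y) onto the x-axis while accumulating
# the rotation angle. The CORDIC scale factor K is irrelevant here because
# only the angle is of interest.
def cordic_vectoring_angle(x, y, iterations=32):
    angle = 0.0
    for i in range(iterations):
        sigma = 1.0 if y < 0 else -1.0            # drive y toward zero
        x, y = x - sigma * y * 2.0**-i, y + sigma * x * 2.0**-i
        angle -= sigma * math.atan(2.0**-i)
    return angle

def arccos_ref(t):
    # Mirror negative arguments so the vectoring angle stays inside CORDIC's
    # convergence range of roughly +/- 99.9 degrees.
    if t < 0:
        return math.pi - arccos_ref(-t)
    return cordic_vectoring_angle(t, math.sqrt(1.0 - t * t))

for t in (0.0, 0.5, -0.25):
    print(f"{arccos_ref(t):.9f} vs math.acos: {math.acos(t):.9f}")
```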
Understanding contention-based channels and using them for defense Microarchitectural resources such as caches and predictors can be used to leak information across security domains. Significant prior work has demonstrated attacks and defenses for specific types of such microarchitectural side and covert channels. In this paper, we introduce a general mathematical study of microarchitectural channels using information theory. Our conceptual contribution is a simple mathematical abstraction that captures the common characteristics of all microarchitectural channels. We call this the Bucket model and it reveals that microarchitectural channels are fundamentally different from side and covert channels in networking. We then quantify the communication capacity of several microarchitectural covert channels (including channels that rely on performance counters, AES hardware and memory buses) and measure bandwidths across both KVM based heavy-weight virtualization and light-weight operating-system level isolation. We demonstrate channel capacities that are orders of magnitude higher compared to what was previously considered possible. Finally, we introduce a novel way of detecting intelligent adversaries that try to hide while running covert channel eavesdropping attacks. Our method generalizes a prior detection scheme (that modeled static adversaries) by introducing noise that hides the detection process from an intelligent eavesdropper.
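For intuition about how such capacities are estimated (a generic textbook model, not the paper's Bucket abstraction), one can treat a noisy covert channel as a binary symmetric channel with crossover probability p, whose capacity at symbol rate R is R(1 - H(p)); the symbol rate and error probabilities below are invented placeholders:

```python
import math

# Capacity of a binary symmetric channel: C = R * (1 - H(p)), where H is the
# binary entropy function and R the raw symbol rate of the covert sender.
def binary_entropy(p):
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

symbol_rate = 100_000    # symbols/s (placeholder, not a measured figure)
for p in (0.01, 0.05, 0.10, 0.20):
    capacity = symbol_rate * (1 - binary_entropy(p))
    print(f"error rate p = {p:.2f}: capacity ~ {capacity / 1000:.1f} kbit/s")
```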
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54 ×(CPU), 1.56 ×(GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02 ×(memristor-based PIM) and 10.48 ×(CMOS-based PIM), on average.
1.24
0.24
0.24
0.24
0.12
0.017778
0
0
0
0
0
0
0
0
Sparc T4: A Dynamically Threaded Server-on-a-Chip The Sparc T4 is the next generation of Oracle's multicore, multithreaded 64-bit Sparc server processor. It delivers significant performance improvements over its predecessor, the Sparc T3 processor. The authors describe Sparc T4's key features and detail the microarchitecture of the dynamically threaded S3 processor core, which is implemented on Sparc T4.
RHMD: evasion-resilient hardware malware detectors. Hardware Malware Detectors (HMDs) have recently been proposed as a defense against the proliferation of malware. These detectors use low-level features that can be collected by the hardware performance monitoring units on modern CPUs to detect malware as a computational anomaly. Several aspects of the detector construction have been explored, leading to detectors with high accuracy. In this paper, we explore the question of how well evasive malware can avoid detection by HMDs. We show that existing HMDs can be effectively reverse-engineered and subsequently evaded, allowing malware to hide from detection without substantially slowing it down (which is important for certain types of malware). This result demonstrates that the current generation of HMDs can be easily defeated by evasive malware. Next, we explore how well a detector can evolve if it is exposed to this evasive malware during training. We show that simple detectors, such as logistic regression, cannot detect the evasive malware even with retraining. More sophisticated detectors can be retrained to detect evasive malware, but the retrained detectors can be reverse-engineered and evaded again. To address these limitations, we propose a new type of Resilient HMDs (RHMDs) that stochastically switch between different detectors. These detectors can be shown to be provably more difficult to reverse engineer based on recent results in probably approximately correct (PAC) learnability theory. We show that indeed such detectors are resilient to both reverse engineering and evasion, and that the resilience increases with the number and diversity of the individual detectors. Our results demonstrate that these HMDs offer effective defense against evasive malware at low additional complexity.
Post-silicon CPU adaptation made practical using machine learning Processors that adapt architecture to workloads at runtime promise compelling performance per watt (PPW) gains, offering one way to mitigate diminishing returns from pipeline scaling. State-of-the-art adaptive CPUs deploy machine learning (ML) models on-chip to optimize hardware by recognizing workload patterns in event counter data. However, despite breakthrough PPW gains, such designs are not yet widely adopted due to the potential for systematic adaptation errors in the field. This paper presents an adaptive CPU based on Intel SkyLake that (1) closes the loop to deployment, and (2) provides a novel mechanism for post-silicon customization. Our CPU performs predictive cluster gating, dynamically setting the issue width of a clustered architecture while clock-gating unused resources. Gating decisions are driven by ML adaptation models that execute on an existing microcontroller, minimizing design complexity and allowing performance characteristics to be adjusted with the ease of a firmware update. Crucially, we show that although adaptation models can suffer from statistical blindspots that risk degrading performance on new workloads, these can be reduced to minimal impact with careful design and training. Our adaptive CPU improves PPW by 31.4% over a comparable non-adaptive CPU on SPEC2017, and exhibits two orders of magnitude fewer Service Level Agreement (SLA) violations than the state-of-the-art. We show how to optimize PPW using models trained to different SLAs or to specific applications, e.g. to improve datacenter hardware in situ. The resulting CPU meets real world deployment criteria for the first time and provides a new means to tailor hardware to individual customers, even as their needs change.
FaCT: A Flexible, Constant-Time Programming Language We argue that C is unsuitable for writing timing-channel free cryptographic code that is both fast and readable. Readable implementations of crypto routines would contain high-level constructs like if statements, constructs that also introduce timing vulnerabilities. To avoid vulnerabilities, programmers must rewrite their code to dodge intuitive yet dangerous constructs, cluttering the code-base and potentially introducing new errors. Moreover, even when programmers are diligent, compiler optimization passes may still introduce branches and other sources of timing side channels. This status quo is the worst of both worlds: tortured source code that can still yield vulnerable machine code. We propose to solve this problem with a domain-specific language that permits programmers to intuitively express crypto routines and reason about secret values, and a compiler that generates efficient, timing-channel free assembly code.
MicroScope: Enabling Microarchitectural Replay Attacks A microarchitectural replay attack is a novel class of attack where an adversary can denoise nearly arbitrary microarchitectural side channels in a single run of the victim. The idea is to cause the victim to repeatedly replay by inducing pipeline flushes. In this article, we design, implement, and demonstrate our ideas in a framework, called MicroScope, that causes repeated pipeline flushes by in...
Covert Channels through Random Number Generator: Mechanisms, Capacity Estimation and Mitigations. Covert channels present serious security threat because they allow secret communication between two malicious processes even if the system inhibits direct communication. We describe, implement and quantify a new covert channel through shared hardware random number generation (RNG) module that is available on modern processors. We demonstrate that a reliable, high-capacity and low-error covert channel can be created through the RNG module that works across CPU cores and across virtual machines. We quantify the capacity of the RNG channel under different settings and show that transmission rates in the range of 7-200 kbit/s can be achieved depending on a particular system used for transmission, assumptions, and the load level. Finally, we describe challenges in mitigating the RNG channel, and propose several mitigation approaches both in software and hardware.
Port Contention for Fun and Profit Simultaneous Multithreading (SMT) architectures are attractive targets for side-channel enabled attackers, with their inherently broader attack surface that exposes more per physical core microarchitecture components than cross-core attacks. In this work, we explore SMT execution engine sharing as a side-channel leakage source. We target ports to stacks of execution units to create a high-resolution timing side-channel due to port contention, inherently stealthy since it does not depend on the memory subsystem like other cache or TLB based attacks. Implementing our channel on Intel Skylake and Kaby Lake architectures featuring Hyper-Threading, we mount an end-to-end attack that recovers a P-384 private key from an OpenSSL-powered TLS server using a small number of repeated TLS handshake attempts. Furthermore, we show that traces targeting shared libraries, static builds, and SGX enclaves are essentially identical, hence our channel has wide target application.
Practical Timing Side Channel Attacks against Kernel Space ASLR Due to the prevalence of control-flow hijacking attacks, a wide variety of defense methods to protect both user space and kernel space code have been developed in the past years. A few examples that have received widespread adoption include stack canaries, non-executable memory, and Address Space Layout Randomization (ASLR). When implemented correctly (i.e., a given system fully supports these protection methods and no information leak exists), the attack surface is significantly reduced and typical exploitation strategies are severely thwarted. All modern desktop and server operating systems support these techniques and ASLR has also been added to different mobile operating systems recently. In this paper, we study the limitations of kernel space ASLR against a local attacker with restricted privileges. We show that an adversary can implement a generic side channel attack against the memory management system to deduce information about the privileged address space layout. Our approach is based on the intrinsic property that the different caches are shared resources on computer systems. We introduce three implementations of our methodology and show that our attacks are feasible on four different x86-based CPUs (both 32- and 64-bit architectures) and also applicable to virtual machines. As a result, we can successfully circumvent kernel space ASLR on current operating systems. Furthermore, we also discuss mitigation strategies against our attacks, and propose and implement a defense solution with negligible performance overhead.
A Linear Representation of Dynamics of Boolean Networks A new matrix product, called semi-tensor product of matrices, is reviewed. Using it, a matrix expression of logic is proposed, where a logical variable is expressed as a vector, a logical function is expressed as a multiple linear mapping. Under this framework, a Boolean network equation is converted into an equivalent algebraic form as a conventional discrete-time linear system. Analyzing the transition matrix of the linear system, formulas are obtained to show a) the number of fixed points; b) the numbers of cycles of different lengths; c) transient period, for all points to enter the set of attractors; and d) basin of each attractor. The corresponding algorithms are developed and used to some examples.
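For a toy instance, brute-force enumeration of the 2^n states recovers exactly what the paper reads off the transition matrix of the algebraic form: fixed points and attractor cycle lengths. A hypothetical two-node network in Python (the network itself is an invented example, not one from the paper):

```python
from itertools import product

# Example Boolean network (hypothetical): x1' = x2, x2' = x1 AND x2.
update = lambda x1, x2: (x2, x1 & x2)

states = list(product((0, 1), repeat=2))
step = {s: update(*s) for s in states}   # each entry mirrors one column of the
                                         # transition matrix of the linear form

# a) Fixed points: states mapped to themselves.
print("fixed points:", [s for s in states if step[s] == s])

# b) Length of the attractor cycle eventually reached from each state.
def attractor_length(s):
    seen = {}
    t = 0
    while s not in seen:
        seen[s] = t
        s, t = step[s], t + 1
    return t - seen[s]

print("attractor lengths:", {s: attractor_length(s) for s in states})
```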
On The Advantages of Tagged Architecture This paper proposes that all data elements in a computer memory be made to be self-identifying by means of a tag. The paper shows that the advantages of the change from the traditional von Neumann machine to tagged architecture are seen in all software areas including programming systems, operating systems, debugging systems, and systems of software instrumentation. It discusses the advantages that accrue to the hardware designer in the implementation and gives examples for large- and small-scale systems. The economic costs of such an implementation for a minicomputer system are examined. The paper concludes that such a machine architecture may well be a suitable replacement for the traditional von Neumann architecture.
MDVM System Concept, Paging Latency and Round-2 Randomized Leader Election Algorithm in SG The future trend in the computing paradigm is marked by mobile computing based on mobile-client/server architecture connected by wireless communication networks. However, mobile computing systems have limitations because of the resource-thin mobile clients operating on battery power. The MDVM system allows the mobile clients to utilize memory and CPU resources of Server-Groups (SG) to overcome the resource limitations of clients in order to support high-end mobile applications such as m-commerce and virtual organization (VO). In this paper the concept of the MDVM system and the architecture of a cellular network containing the SG are discussed. A round-2 randomized distributed algorithm is proposed to elect a unique leader and co-leader of the SG. The algorithm is free from any assumption about network topology or buffer space limitations and is based on dynamically elected coordinators, eliminating a single point of failure. The algorithm is implemented in a distributed system setup and the network-paging latency values of wired and wireless networks are measured experimentally. The experimental results demonstrate that in most cases the algorithm successfully terminates in the first round and the possibility of second-round execution decreases significantly with the increase in the size of the SG (|Na|). The overall message complexity of the algorithm is O(|Na|). The comparative study of network-paging latencies indicates that 3G/4G mobile communication systems would support the realization of the MDVM system.
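In the abstract, the election reduces to a retry loop on random draws; the sketch below simulates only that skeleton (the paper's coordinator and message-passing machinery is omitted, and the group and draw-space sizes are arbitrary illustration values):

```python
import random

def elect(group_size, draw_space):
    """Repeat rounds of random draws until the maximum draw is unique; its
    holder becomes leader and the runner-up becomes co-leader."""
    rounds = 0
    while True:
        rounds += 1
        draws = sorted(((random.randrange(draw_space), node)
                        for node in range(group_size)), reverse=True)
        if draws[0][0] != draws[1][0]:          # unique maximum found
            leader, co_leader = draws[0][1], draws[1][1]
            return leader, co_leader, rounds

results = [elect(group_size=20, draw_space=10_000)[2] for _ in range(10_000)]
print("fraction of elections settled in round 1:", results.count(1) / len(results))
```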
A 2.4GHz sub-harmonically injection-locked PLL with self-calibrated injection timing A low-phase-noise integer-N phase-locked loop (PLL) is attractive in many applications, such as clock generation and analog-to-digital conversion. The sub-harmonically injection-locked technique, sub-sampling technique, and the multiplying delay-locked loop (MDLL) can significantly improve the phase noise of an integer-N PLL. In the sub-harmonically injection-locked technique, to inject a low-frequency reference clock into a high-frequency voltage-controlled oscillator (VCO), the injection timing should be tightly controlled. If the injection timing varies due to process variation, it may cause a large reference spur or even cause the PLL to fail to lock. A sub-harmonically injection-locked PLL (SILPLL) adopts a sub-sampling phase-detector (PD) to automatically align the phase between the injection pulse and a VCO. However, a sub-sampling PD has a small capture range and a low bandwidth. The high-frequency non-linear effects of a sub-sampling PD may degrade the accuracy and limit the maximum speed of a VCO. In addition, a frequency-locked loop is needed for a sub-sampling PD. A delay line is manually adjusted to achieve the correct injection timing. However, the delay line is sensitive to process variations. Thus, the injection timing should be calibrated.
An Ultra-Low Power Fully Integrated Energy Harvester Based on Self-Oscillating Switched-Capacitor Voltage Doubler This paper presents a fully integrated energy harvester that maintains >35% end-to-end efficiency when harvesting from a 0.84 mm² solar cell in a low light condition of 260 lux, converting 7 nW input power from 250 mV to 4 V. Newly proposed self-oscillating switched-capacitor (SC) DC-DC voltage doublers are cascaded to form a complete harvester, with configurable overall conversion ratio from 9× to 23×. In each voltage doubler, the oscillator is completely internalized within the SC network, eliminating clock generation and level shifting power overheads. A single doubler has >70% measured efficiency across 1 nA to 0.35 mA output current (>10^5 range) with low idle power consumption of 170 pW. In the harvester, each doubler has independent frequency modulation to maintain its optimum conversion efficiency, enabling optimization of the harvester's overall conversion efficiency. A leakage-based delay element provides energy-efficient frequency control over a wide range, enabling low idle power consumption and a wide load range with optimum conversion efficiency. The harvester delivers 5 nW-5 μW output power with >40% efficiency and has an idle power consumption of 3 nW, in a test chip fabricated in 0.18 μm CMOS technology.
A VCO-Based Nonuniform Sampling ADC Using a Slope-Dependent Pulse Generator This paper presents a voltage-controlled oscillator (VCO)-based nonuniform sampling analog-to-digital converter (ADC) as an alternative to the level-crossing (LC)-based converters for digitizing biopotential signals. This work aims to provide a good signal-to-noise-and-distortion ratio at a low average sampling rate. In the proposed conversion method, a slope-dependent pulse generation block is used to provide a variable sample rate adjusted according to the input signal's slope. Simulation results show that the introduced method meets a target reconstruction quality with a sampling rate approaching 92 Sps, while on the same MIT-BIH Arrhythmia N 106 ECG benchmark, the classic LC-based approach requires a sampling rate higher than 500 Sps. The benefits of the proposed method are more remarkable when the input signal is very noisy. The proposed ADC achieves a compression ratio close to 4, but with only 5.4% root-mean-square difference when tested using the MIT-BIH Arrhythmia Database.
1.11
0.11
0.11
0.1
0.1
0.033333
0.017037
0.000734
0
0
0
0
0
0
The software radio concept Since the early 1980s exponential growth of cellular mobile systems has been observed, which has produced, all over the world, the definition of a plethora of analog and digital standards. In 2000 the industrial competition between Asia, Europe, and America promises a very difficult path toward the definition of a unique standard for future mobile systems, although market analyses underline the trading benefits of a common worldwide standard. It is therefore in this field that the software radio concept is emerging as a potential pragmatic solution: a software implementation of the user terminal able to dynamically adapt to the radio environment in which it is located at any given time. In fact, the term software radio stands for radio functionalities defined by software, meaning the possibility to define by software the typical functionality of a radio interface, usually implemented in TX and RX equipment by dedicated hardware. The presence of the software defining the radio interface necessarily implies the use of DSPs to replace dedicated hardware, to execute, in real time, the necessary software. In this article objectives, advantages, problem areas, and technological challenges of software radio are addressed. In particular, SW radio transceiver architecture, possible SW implementation, and its download are analyzed.
SOFTWARE RADIO APPROACH FOR RE-CONFIGURABLE MULTI-STANDARD RADIOS Next-generation wireless systems will lead to an integration of existing networks, forming a heterogeneous network. Re-configurable systems will be the enabling technology sharing hardware resources for different purposes. This paper will highlight the requirements of a re-configurable multi-standard terminal from the physical-layer point of view. A re-configurable architecture consisting of algorithm-domain-specific accelerators, allowing autonomous complex digital signal processing without interference from a microprocessor, will be explained. Performance comparison numbers with the latest Digital Signal Processors will show the effectiveness of the proposed architecture.
Low complexity flexible filter banks for uniform and non-uniform channelisation in software radios using coefficient decimation A new approach to implement computationally efficient reconfigurable filter banks (FBs) is presented. If the coefficients of a finite impulse response filter are decimated by M, that is, if every Mth coefficient of the filter is kept unchanged and the remaining coefficients are replaced by zeros, a multi-band frequency response will be obtained. The frequency response of the decimated filter will have bands with centre frequencies at 2πk/M, where k is an integer ranging from 0 to M-1. If these multi-band frequency responses are subtracted from each other or selectively masked using inherently low-complexity wide transition-band masking filters, different low-pass, high-pass, band-pass and band-stop frequency bands are obtained. The resulting FB, whose bands' centre frequencies are located at integer multiples of 2π/M, is a low-complexity alternative to the well-known uniform discrete Fourier transform FBs (DFTFBs). It is shown that the channeliser based on the proposed FB does not require any DFT for its implementation, unlike a DFTFB. It is also shown that the proposed FB is more flexible and easily reconfigurable than the DFTFB. Furthermore, the proposed FB is able to receive channels of multiple standards simultaneously, whereas separate FBs would be required for simultaneous reception of multi-standard channels in a DFTFB-based receiver. This is achieved through a second stage of coefficient decimation. Implementation results show that the proposed FB offers an area reduction of 41% and an improvement in speed of 50.8% over DFTFBs.
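The core effect is easy to verify numerically. In the NumPy sketch below, every M-th tap of a windowed-sinc lowpass prototype is kept and the rest are zeroed (the kept taps are scaled by M here simply to restore unity passband gain); sampling the response at the predicted centres 2πk/M shows the multi-band behaviour. Prototype length and cutoff are illustrative choices.

```python
import numpy as np

# Lowpass FIR prototype: ideal sinc truncated with a Hamming window.
def lowpass_prototype(num_taps=65, cutoff=0.05):   # cutoff in cycles/sample
    n = np.arange(num_taps) - (num_taps - 1) / 2
    h = 2 * cutoff * np.sinc(2 * cutoff * n)
    return h * np.hamming(num_taps)

h = lowpass_prototype()
M = 4
keep = np.arange(h.size) % M == 0
h_dec = np.where(keep, M * h, 0.0)                 # coefficient decimation

w = np.linspace(0, np.pi, 1024)                    # evaluate H(e^{jw}) directly
H = np.abs(np.exp(-1j * np.outer(w, np.arange(h.size))) @ h_dec)

for k in range(M // 2 + 1):                        # band centres in [0, pi]
    i = np.argmin(np.abs(w - 2 * np.pi * k / M))
    print(f"|H| at w = 2*pi*{k}/{M}: {H[i]:.3f}")  # ~1.0 in each band
```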
Digital Signal Processing in Radio Receivers and Transmitters The interface between analog and digital signal processing paths in radio receivers and transmitters is steadily migrating toward the antenna as engineers learn to combine the unique attributes and capabilities of DSP with those of traditional communication system designs to achieve systems with superior and broadened capabilities while reducing system cost. Digital signal processing (DSP) techniques are rapidly being applied to many signal conditioning and signal processing tasks traditionally performed by analog components and subsystems in RF communication receivers and transmitters [1-4]. The incentive to replace analog implementations of signal processing functions with DSP-based processing includes reduced cost, enhanced performance, improved reliability, ease of manufacturing and maintenance, and operating flexibility and configurability [5]. Technologies that facilitate cost-effective DSP-based implementations include a very large market base supporting high-performance programmable signal processing chips [6], field programmable gate arrays (FPGA), application-specific integrated circuits (ASICs), and high-performance analog-to-digital and digital-to-analog converters (ADC and DAC respectively) [7]. The optimum point for inserting DSP in a signal processing chain is determined by matching the system performance requirements to bandwidth and signal-to-noise ratio (i.e., speed and precision) limitations of the signal processors and the signal converters. In this paper we review how clever algorithmic structures interact with DSP hardware to extend the range and performance of DSP-based processing in RF transmitters and receivers.
Introducing software defined radio to 4G wireless: Necessity, advantage, and impediment. This work summarizes the current state of the art in software radio for 4G systems. Specifically, this work demonstrates that classic radio structures, e.g., heterodyne reception, homodyne reception, and their improved implementations, are inadequate selections for multi-mode reception. This opens the door to software defined radio, a novel reception architecture which promises ease in multi-band, multi-protocol design. The work presents the many advantages of such an architecture, including flexibility, reduced cost via component reduction, and improved reliability via, e.g., the elimination of environmental instability. The work also explains the limitations that currently curtail the widespread use of SDR, including issues surrounding A/D converters, management of software and power, and clock generation. This provides direction for future research to enable the broad applicability of SDR in 4G cellular and beyond.
A Low-Power Digit-Based Reconfigurable FIR Filter In this brief, we present a digit-reconfigurable finite-impulse response (FIR) filter architecture with a very fine granularity. It provides a flexible yet compact and low-power solution to FIR filters with a wide range of precision and tap length. Based on the proposed architecture, an 8-digit reconfigurable FIR filter chip is implemented in a single-poly quadruple-metal 0.35-μm CMOS technology. Measurement results show that the fabricated chip operates up to 86 MHz when the filter draws 16.5 mW of power from a 2.3-V power supply.
Charge-domain signal processing of direct RF sampling mixer with discrete-time filters in Bluetooth and GSM receivers RF circuits for multi-GHz frequencies have recently migrated to low-cost digital deep-submicron CMOS processes. Unfortunately, this process environment, which is optimized only for digital logic and SRAM memory, is extremely unfriendly to conventional analog and RF designs. We present fundamental techniques recently developed that transform the RF and analog circuit design complexity to the digitally intensive domain for a wireless RF transceiver, so that it enjoys the benefits of digital and switched-capacitor approaches. Direct RF sampling techniques allow great flexibility in reconfigurable radio design. Digital signal processing concepts are used to help relieve analog design complexity, allowing one to reduce cost and power consumption in a reconfigurable design environment. The ideas presented have been used in Texas Instruments to develop two generations of commercial digital RF processors: a single-chip Bluetooth radio and a single-chip GSM radio. We further present details of the RF receiver front end for a GSM radio realized in a 90-nm digital CMOS technology. The circuit, consisting of a low-noise amplifier, transconductance amplifier, and switching mixer, offers 32.5 dB dynamic range with a digitally configurable voltage gain of 40 dB down to 7.5 dB. A series of decimation and discrete-time filtering follows the mixer and performs a highly linear second-order lowpass filtering to reject close-in interferers. The front-end gains can be configured with an automatic gain control to select an optimal setting to form a trade-off between noise figure and linearity and to compensate for process and temperature variations. Even under digital switching activity, the noise figure at the 40 dB maximum gain is 1.8 dB, with +50 dBm IIP2 at the 34 dB gain. The variation of the input matching versus multiple gains is less than 1 dB. The circuit in total occupies 3.1 mm².
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
Edge Computing: Vision and Challenges. The proliferation of Internet of Things (IoT) and the success of rich cloud services have pushed the horizon of a new computing paradigm, edge computing, which calls for processing the data at the edge of the network. Edge computing has the potential to address the concerns of response time requirement, battery life constraint, bandwidth cost saving, as well as data safety and privacy. In this pap...
A review of process fault detection and diagnosis: Part II: Qualitative models and search strategies In this part of the paper, we review qualitative model representations and search strategies used in fault diagnostic systems. Qualitative models are usually developed based on some fundamental understanding of the physics and chemistry of the process. Various forms of qualitative models such as causal models and abstraction hierarchies are discussed. The relative advantages and disadvantages of these representations are highlighted. In terms of search strategies, we broadly classify them as topographic and symptomatic search techniques. Topographic searches perform malfunction analysis using a template of normal operation, whereas, symptomatic searches look for symptoms to direct the search to the fault location. Various forms of topographic and symptomatic search strategies are discussed.
TCP/IP Timing Channels: Theory to Implementation There has been significant recent interest in covert communication using timing channels. In network timing channels, information is leaked by controlling the time between transmissions of consecutive packets. Our work focuses on network timing channels and provides two main contributions. The first is to quantify the threat posed by covert network timing channels. The other is to use timing channels to communicate at a low data rate without being detected. In this paper, we design and implement a covert TCP/IP timing channel. We are able to quantify the achievable data rate (or leak rate) of such a covert channel. Moreover, we show that by sacrificing data rate, the traffic patterns of the covert timing channel can be made computationally indistinguishable from that of normal traffic, which makes detecting such communication virtually impossible. We demonstrate the efficacy of our solution by showing significant performance gains in terms of both data rate and covertness over the state-of-the-art.
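A toy software-only model of the encoding idea (no real sockets; all constants are invented for illustration): bits map to short or long inter-packet gaps, the receiver thresholds the observed gaps, and random jitter stands in for network noise.

```python
import random

SHORT, LONG, THRESHOLD = 0.010, 0.030, 0.020   # seconds (illustrative values)

def send(bits, jitter=0.004):
    """Return the inter-packet delays an observer on the wire would see."""
    return [(LONG if b else SHORT) + random.uniform(-jitter, jitter)
            for b in bits]

def receive(gaps):
    """Decode by thresholding each observed gap."""
    return [1 if g > THRESHOLD else 0 for g in gaps]

msg = [random.randint(0, 1) for _ in range(64)]
recovered = receive(send(msg))
print("bit errors:", sum(a != b for a, b in zip(msg, recovered)))
```

With jitter smaller than half the gap separation, decoding is error-free; pushing the two gap values closer together trades data rate against robustness, mirroring the rate/covertness trade-off the abstract describes.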
The accelerator store: A shared memory framework for accelerator-based systems This paper presents the many-accelerator architecture, a design approach combining the scalability of homogeneous multi-core architectures and system-on-chip's high performance and power-efficient hardware accelerators. In preparation for systems containing tens or hundreds of accelerators, we characterize a diverse pool of accelerators and find each contains significant amounts of SRAM memory (up to 90% of their area). We take advantage of this discovery and introduce the accelerator store, a scalable architectural component to minimize accelerator area by sharing its memories between accelerators. We evaluate the accelerator store for two applications and find significant system area reductions (30%) in exchange for small overheads (2% performance, 0%-8% energy). The paper also identifies new research directions enabled by the accelerator store and the many-accelerator architecture.
A 0.5-V 2.5-GHz high-gain low-power regenerative amplifier based on Colpitts oscillator topology in 65-nm CMOS This paper proposes the regenerative amplifier based on the Colpitts oscillator topology. The positive feedback amount was optimized analytically in the circuit design. The proposed regenerative amplifier was fabricated in 65 nm CMOS technology. The measurement results showed 28.7 dB gain and 6.4 dB noise figure at 2.55 GHz while consuming 120 μW under the 0.5-V power supply.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54 ×(CPU), 1.56 ×(GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02 ×(memristor-based PIM) and 10.48 ×(CMOS-based PIM), on average.
1.007994
0.007273
0.005455
0.004364
0.003636
0.000909
0.000053
0
0
0
0
0
0
0
Verification at RTL Using Separation of Design Concerns Design-for-test, logic built-in self-test, memory technology mapping, and clocking concerns require team-months of verification time as they traditionally happen at gate-level. We present a novel concern-oriented methodology that enables automatic insertion of these concerns at the register-transfer level, where verification is easier. The methodology involves three main phases: 1) flip-flop inference and instantiation algorithms that handle parametric register transfer level (RTL) modules; 2) transformations that take entry RTL and produce RTL modules where memory elements are separated from functionality; and 3) a concern weaving tool that automatically inserts memory-related design concerns implemented in recipe files into the RTL modules. The transformation is proven sound and validated by equivalence checking using formal verification. We implemented the methodology in a tool that is currently used in an industrial setting, wherein it reduced design verification time by more than 40%. The methodology is also effective with open source embedded system frameworks.
Threaded code The concept of “threaded code” is presented as an alternative to machine language code. Hardware and software realizations of it are given. In software it is realized as interpretive code not needing an interpreter. Extensions and optimizations are mentioned.
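The mechanism is easy to mimic at a high level: the "compiled" program is a flat list of primitive-routine references plus inline operands, and the inner interpreter is a single fetch-execute loop. A minimal Python sketch with invented primitives:

```python
# Threaded code in miniature: `thread` holds routine references and inline
# literals; the address interpreter below fetches and calls each in turn.

stack, thread, ip = [], [], 0

def lit():                     # push the next cell of the thread as a literal
    global ip
    stack.append(thread[ip]); ip += 1

def add():
    b, a = stack.pop(), stack.pop(); stack.append(a + b)

def dup():
    stack.append(stack[-1])

def mul():
    b, a = stack.pop(), stack.pop(); stack.append(a * b)

# "Compile" (3 + 4)^2 into threaded code: LIT 3, LIT 4, ADD, DUP, MUL.
thread = [lit, 3, lit, 4, add, dup, mul]

while ip < len(thread):        # the inner (address) interpreter
    op = thread[ip]; ip += 1
    op()

print(stack.pop())             # 49
```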
SimpleScalar: An Infrastructure for Computer System Modeling Designers can execute programs on software models to validate a proposed hardware design's performance and correctness, while programmers can use these models to develop and test software before the real hardware becomes available. Three critical requirements drive the implementation of a software model: performance, flexibility, and detail. Performance determines the amount of workload the model can exercise given the machine resources available for simulation. Flexibility indicates how well the model is structured to simplify modification, permitting design variants or even completely different designs to be modeled with ease. Detail defines the level of abstraction used to implement the model's components. The SimpleScalar tool set provides an infrastructure for simulation and architectural modeling. It can model a variety of platforms ranging from simple unpipelined processors to detailed dynamically scheduled microarchitectures with multiple-level memory hierarchies. SimpleScalar simulators reproduce computing device operations by executing all program instructions using an interpreter. The tool set's instruction interpreters support several popular instruction sets, including Alpha, PowerPC, x86, and ARM.
An approach to testing specifications An approach to testing the consistency of specifications is explored, which is applicable to the design validation of communication protocols and other cases of step-wise refinement. In this approach, a testing module compares a trace of interactions obtained from an execution of the refined specification (e. g. the protocol specification) with the reference specification (e. g. the communication service specification). Non-determinism in reference specifications presents certain problems. Using an extended finite state transition model for the specifications, a strategy for limiting the amount of non-determinacy is presented. An automated method for constructing a testing module for a given reference specification is discussed. Experience with the application of this testing approach to the design of a Transport protocol and a distributed mutual exclusion algorithm is described.
Scientific benchmarking of parallel computing systems: twelve ways to tell the masses when reporting performance results Measuring and reporting performance of parallel computers constitutes the basis for scientific advancement of high-performance computing (HPC). Most scientific reports show performance improvements of new techniques and are thus obliged to ensure reproducibility or at least interpretability. Our investigation of a stratified sample of 120 papers across three top conferences in the field shows that the state of the practice is lacking. For example, it is often unclear if reported improvements are deterministic or observed by chance. In addition to distilling best practices from existing work, we propose statistically sound analysis and reporting techniques and simple guidelines for experimental design in parallel computing and codify them in a portable benchmarking library. We aim to improve the standards of reporting research results and initiate a discussion in the HPC field. A wide adoption of our minimal set of rules will lead to better interpretability of performance results and improve the scientific culture in HPC.
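One of the codified recommendations, reporting a median with a nonparametric confidence interval instead of a bare mean, fits in a few lines; the rank bounds below use the usual normal approximation to the binomial, and the runtimes are synthetic placeholders:

```python
import math
import random

def median_ci95(samples):
    """Median with an approximate 95% nonparametric CI taken from order
    statistics (normal approximation to the binomial rank distribution)."""
    xs, n = sorted(samples), len(samples)
    half = 1.96 * math.sqrt(n) / 2
    lo = max(int(math.floor(n / 2 - half)), 0)
    hi = min(int(math.ceil(n / 2 + half)), n - 1)
    return xs[n // 2], xs[lo], xs[hi]

# Synthetic runtimes: a stable base plus occasional system-noise stragglers.
runs = [1.0 + random.gauss(0, 0.05) + random.expovariate(20) for _ in range(30)]
med, lo, hi = median_ci95(runs)
print(f"median {med:.3f} s, 95% CI [{lo:.3f}, {hi:.3f}] s over {len(runs)} runs")
```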
OpenFPGA: An Opensource Framework Enabling Rapid Prototyping of Customizable FPGAs Driven by the strong need in data processing applications, Field Programmable Gate Arrays (FPGAs) are playing an ever-increasing role as programmable accelerators in modern computing systems. To fully unlock processing capabilities for domain-specific applications, FPGA architectures have to be tailored for seamless cooperation with other computing resources. However, prototyping and bringing to production a customized FPGA is a costly and complex endeavor even for industrial vendors. In this paper, we introduce OpenFPGA, an opensource framework that enables rapid prototyping of customizable FPGA architectures through a semi-custom design approach. We propose an XML-to-Prototype design flow, where the Verilog netlists of a full FPGA fabric can be autogenerated using an extension of the XML language from the VTR framework and then fed into a back-end flow to generate production-ready layouts. OpenFPGA also includes a general-purpose Verilog-to-Bitstream generator for any FPGA described by the XML language. We demonstrate the capability of this automatic design flow with a Stratix IV-like FPGA architecture using a commercial 40 nm technology node, and perform a detailed comparison to its academic and commercial counterparts. Compared to the current state-of-the-art academic results, our FPGA fabric reduces the area by 1.75× and the delay by 3× on average. In addition, OpenFPGA significantly reduces the gap between semi-custom designed FPGAs and fully-optimized commercial products with a penalty of only 60% in area and 30% in delay, respectively.
Hardware Design with a Scripting Language The Python Hardware Description Language (PyHDL) provides a scripting interface to object-oriented hardware design in C++. PyHDL uses the PamDC and PAM-Blox libraries to generate FPGA circuits. The main advantage of scripting languages is a reduction in development time for high-level designs. We propose a two-step approach: first, use scripting to explore effects of composition and parameterisation; second, convert the scripted designs into compiled components for performance. Our results show that, for small designs, our method offers a 5 to 7 times improvement in turnaround time. For a large 10x10 matrix vector multiplier, our method offers 365% and 19% improvements in turnaround time over purely scripted and purely compiled methods, respectively.
Randomized algorithms This text by two well-known experts in the field presents the basic concepts in the design and analysis of randomized algorithms at a level accessible to beginning graduate students, professionals and researchers.
Building efficient wireless sensor networks with low-level naming In most distributed systems, naming of nodes for low-level communication leverages topological location (such as node addresses) and is independent of any application. In this paper, we investigate an emerging class of distributed systems where low-level communication does not rely on network topological location. Rather, low-level communication is based on attributes that are external to the network topology and relevant to the application. When combined with dense deployment of nodes, this kind of named data enables in-network processing for data aggregation, collaborative signal processing, and similar problems. These approaches are essential for emerging applications such as sensor networks where resources such as bandwidth and energy are limited. This paper is the first description of the software architecture that supports named data and in-network processing in an operational, multi-application sensor-network. We show that approaches such as in-network aggregation and nested queries can significantly affect network traffic. In one experiment aggregation reduces traffic by up to 42% and nested queries reduce loss rates by 30%. Although aggregation has been previously studied in simulation, this paper demonstrates nested queries as another form of in-network processing, and it presents the first evaluation of these approaches over an operational testbed.
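The traffic effect of in-network aggregation can be sanity-checked with simple message counting on an idealized routing tree (a complete binary tree is an assumption of this sketch, not the paper's testbed topology, and the numbers are not the paper's measurements):

```python
# Compare message counts with and without in-network aggregation on an
# idealized complete binary routing tree. Leaves are sensors; the root is
# the sink. Tree depth is an arbitrary illustration value.

depth = 6
leaves = 2 ** depth

# Without aggregation, every leaf reading is forwarded hop by hop to the
# sink, so each of the `leaves` readings crosses `depth` links.
no_agg_msgs = leaves * depth

# With aggregation, each node combines its children's values into a single
# upstream message, so every link in the tree carries exactly one message.
agg_msgs = 2 ** (depth + 1) - 2     # number of links in the tree

print(f"no aggregation: {no_agg_msgs} messages")
print(f"aggregation:    {agg_msgs} messages "
      f"({100 * (1 - agg_msgs / no_agg_msgs):.0f}% reduction)")
```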
On the evolution of user interaction in Facebook Online social networks have become extremely popular; numerous sites allow users to interact and share content using social links. Users of these networks often establish hundreds to even thousands of social links with other users. Recently, researchers have suggested examining the activity network - a network that is based on the actual interaction between users, rather than mere friendship - to distinguish between strong and weak links. While initial studies have led to insights on how an activity network is structurally different from the social network itself, a natural and important aspect of the activity network has been disregarded: the fact that over time social links can grow stronger or weaker. In this paper, we study the evolution of activity between users in the Facebook social network to capture this notion. We find that links in the activity network tend to come and go rapidly over time, and the strength of ties exhibits a general decreasing trend of activity as the social network link ages. For example, only 30% of Facebook user pairs interact consistently from one month to the next. Interestingly, we also find that even though the links of the activity network change rapidly over time, many graph-theoretic properties of the activity network remain unchanged.
An artificial neural network (p,d,q) model for time series forecasting Artificial neural networks (ANNs) are flexible computing frameworks and universal approximators that can be applied to a wide range of time series forecasting problems with a high degree of accuracy. However, despite all the advantages cited for artificial neural networks, their performance on some real time series is not satisfactory. Improving forecasting accuracy, especially in time series forecasting, is an important yet often difficult task facing forecasters. Both theoretical and empirical findings have indicated that integration of different models can be an effective way of improving upon their predictive performance, especially when the models in the ensemble are quite different. In this paper, a novel hybrid model of artificial neural networks is proposed using auto-regressive integrated moving average (ARIMA) models in order to yield a more accurate forecasting model than artificial neural networks. The empirical results with three well-known real data sets indicate that the proposed model can be an effective way to improve the forecasting accuracy achieved by artificial neural networks. Therefore, it can be used as an appropriate alternative model for forecasting tasks, especially when higher forecasting accuracy is needed.
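A condensed sketch of the hybrid idea (assuming scikit-learn is available for the neural component; the series, AR order, and network size are synthetic placeholders): fit a linear autoregressive model by least squares, train a small network on its residuals, and sum the two parts.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor   # assumed available

rng = np.random.default_rng(0)
t = np.arange(300)
y = 0.6 * np.sin(t / 8) + 0.3 * np.sin(t / 3) ** 2 + rng.normal(0, 0.05, t.size)

p = 4                                              # AR order (illustrative)
X = np.column_stack([y[i:len(y) - p + i] for i in range(p)])
target = y[p:]

coef, *_ = np.linalg.lstsq(X, target, rcond=None)  # linear (ARIMA-like) part
linear_pred = X @ coef
resid = target - linear_pred                       # nonlinear structure left over

ann = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
ann.fit(X, resid)                                  # the ANN models the residuals
hybrid_pred = linear_pred + ann.predict(X)

print("AR-only MSE:", np.mean((target - linear_pred) ** 2))
print("hybrid MSE: ", np.mean((target - hybrid_pred) ** 2))
```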
Efficiency of a Regenerative Direct-Drive Electromagnetic Active Suspension. The efficiency and power consumption of a direct-drive electromagnetic active suspension system for automotive applications are investigated. A McPherson suspension system is considered, where the strut consists of a direct-drive brushless tubular permanent-magnet actuator in parallel with a passive spring and damper. This suspension system can both deliver active forces and regenerate power due to imposed movements. A linear quadratic regulator controller is developed for the improvement of comfort and handling (dynamic tire load). The power consumption is simulated as a function of the passive damping in the active suspension system. Finally, measurements are performed on a quarter-car test setup to validate the analysis and simulations.
The real-time segmentation of indoor scene based on RGB-D sensor The vision system of a mobile robot is a low-level function that provides the required target information of the current environment for upper-level vision tasks. The real-time performance and robustness of object segmentation in cluttered environments is still a serious problem in robot vision. In this paper, a new real-time indoor scene segmentation method based on RGB-D images is presented, and the extracted primary object regions are then used for object recognition. First, the depth data is filtered with an improved version of the traditional filtering method. Then, using the improved depth information, the algorithm extracts the foreground and performs object segmentation of the color image at a resolution of 640×480 from a Kinect camera. Finally, the segmentation results are applied to object recognition in indoor scenes to validate the effectiveness of the scene segmentation. The results of indoor segmentation demonstrate the real-time performance and robustness of the proposed method. In addition, the segmentation results improve the accuracy of object recognition and reduce the time of object recognition in indoor cluttered scenes.
A 0.5 V 10-bit 3 MS/s SAR ADC With Adaptive-Reset Switching Scheme and Near-Threshold Voltage-Optimized Design Technique This brief presents a 10-bit ultra-low power energy-efficient successive approximation register (SAR) analog-to-digital converter (ADC). A new adaptive-reset switching scheme is proposed to reduce the switching energy of the capacitive digital-to-analog converter (CDAC). The proposed adaptive-reset switching scheme reduces the average switching energy of the CDAC by 90% compared to the conventional scheme without the common-mode voltage variation. In addition, the near-threshold voltage (NTV)-optimized digital library is adopted to alleviate the performance degradation at the ultra-low supply voltage while simultaneously increasing the energy efficiency. The NTV-optimized design technique is also introduced to the bootstrapped switch design to improve the linearity of the sample-and-hold circuit. The test chip is fabricated in a 65 nm CMOS, and its core area is 0.022 mm². At a supply of 0.5 V and a sampling speed of 3 MS/s, the SAR ADC achieves an ENOB of 8.78 bit and consumes 3.09 μW. The resultant Walden figure-of-merit (FoM) is 2.34 fJ/conv.-step.
1.2
0.2
0.2
0.2
0.2
0.2
0.1
0
0
0
0
0
0
0
A 2.02-5.16 fJ/Conversion Step 10 Bit Hybrid Coarse-Fine SAR ADC With Time-Domain Quantizer in 90 nm CMOS. This paper presents an ultra-low-voltage and power-efficient 10 bit hybrid successive approximation register (SAR) analog-to-digital converter (ADC). For reducing the digital-to-analog converter (DAC) capacitance and comparator requirement, we propose a hybrid architecture comprising a coarse 7 bit SAR ADC and fine 3.5 bit time-to-digital converter (TDC). The Vcm-based switching method is adopted ...
Design Considerations of Ultralow-Voltage Self-Calibrated SAR ADC This brief presents a 0.5-V 11-bit successive approximation register analog-to-digital converter (ADC) with a focus on self-calibration at a low supply voltage. The relationships among the noise of comparators, the resolution of a calibration digital-to-analog converter (DAC), and the overall ADC performance are studied. Analysis shows that the nonlinearity of a calibration DAC and a coupling capacitor has an insignificant effect. An ultralow-leakage switch is also described, and an improved process of measuring mismatch is proposed to alleviate the charge injection of a sampling switch. Fabricated in the 0.13-μm CMOS with an active area of 0.868 mm², the ADC achieves a signal-to-noise-plus-distortion ratio (SNDR) of 62.12 dB and a spurious-free dynamic range of 73.03 dB at a 500-kS/s sampling rate. The power consumption is 39.9 μW.
A 1-V 9.8-ENOB 100-kS/s single-ended SAR ADC with symmetrical DAC switching technique for neural signal acquisition This paper reports a high-performance, low-power and area-efficient single-ended SAR ADC for neural signal acquisition. The proposed 10-bit ADC features a novel symmetrical DAC switching technique that resolves the signal-dependent comparator offset voltage problem in conventional single-ended SAR ADCs, and improves the ADC's ENOB. Combined with an existing LSB single-sided switching method, the proposed switching scheme reduces DAC switching energy by 92% and capacitor array area by 50%. Besides, the proposed ADC also eliminates the need for any power consuming Vcm generation circuit, making it more suitable for low-power System-on-Chip (SoC) integration. The 10-bit prototype ADC is fabricated in a standard 0.18-μm CMOS technology. Operating at a 1.0 V power supply and 100 kS/s, the proposed ADC achieves 58.83 dB SNDR and 63.6 dB SFDR for a 49.06 kHz input signal. The maximum ENOB is 9.8-bit for low frequency input signals, and the minimum ENOB is 9.48-bit at the Nyquist input frequency. The average power consumption is 1.72 μW and the figure-of-merit (FoM) is 24.1 fJ/conversion-step.
A 0.003 mm² 10 b 240 MS/s 0.7 mW SAR ADC in 28 nm CMOS With Digital Error Correction and Correlated-Reversed Switching This paper describes a single-channel, calibration-free Successive-Approximation-Register (SAR) ADC with a resolution of 10 bits at 240 MS/s. A DAC switching technique and an addition-only digital error correction technique based on the non-binary search are proposed to tackle the static and dynamic non-idealities attributed to capacitor mismatch and insufficient DAC settling. The conversion speed is enhanced, and the power and area of the DAC are also reduced by 40% as a result. In addition, a switching scheme lifting the input common mode of the comparator is proposed to further enhance the speed. Moreover, the comparator employs multiple feedback paths for an enhanced regeneration strength to alleviate the metastability problem. Occupying an active area of 0.003 mm² and dissipating 0.68 mW from a 1 V supply at 240 MS/s in 28 nm CMOS, the proposed design achieves an SNDR of 57 dB with low-frequency inputs and 53 dB at the Nyquist input. This corresponds to a conversion efficiency of 4.8 fJ/c.-s. and 7.8 fJ/c.-s. respectively. The DAC switching technique improves the INL and DNL from +1.15/-1.01 LSB and +0.92/-0.28 LSB to within +0.55/-0.45 LSB and +0.45/-0.23 LSB, respectively. This ADC is at least 80% smaller and 32% more power efficient than reported state-of-the-art ADCs of similar resolutions and Nyquist bandwidths larger than 75 MHz.
Normalized-Full-Scale-Referencing Digital-Domain Linearity Calibration for SAR ADC. This paper proposes a linearity calibration algorithm of a capacitive digital-to-analog converter (CDAC) for successive approximation register (SAR) analog-to-digital converters (ADCs) based on a normalized-full-scale of the DAC. Since the capacitor weight errors are represented as the difference between the real and ideal weights with respect to the normalized-full-scale, the calibrated digital r...
A Level-Crossing Based QRS-Detection Algorithm for Wearable ECG Sensors In this paper, an asynchronous analog-to-information conversion system is introduced for measuring the RR intervals of electrocardiogram (ECG) signals. The system contains a modified level-crossing analog-to-digital converter and a novel algorithm for detecting the R-peaks from the level-crossing sampled data in a compressed volume of data. Simulated with the MIT-BIH Arrhythmia Database, the proposed system delivers an average detection accuracy of 98.3%, a sensitivity of 98.89%, and a positive prediction of 99.4%. Synthesized in 0.13 μm CMOS technology with a 1.2 V supply voltage, the overall system consumes 622 nW with a core area of 0.136 mm², which makes it suitable for wearable wireless ECG sensors in body-sensor networks.
Analysis of First-Order Anti-Aliasing Integration Sampler Performance of the first-order anti-aliasing integration sampler used in software-defined radio (SDR) receivers is analyzed versus all practical nonidealities. The nonidealities that are considered in this paper are transconductor finite output resistance, switch resistance, nonzero rise and fall times of the sampling clock, charge injection, clock jitter, and noise. It is proved that the filter i...
Cache operations by MRU change The performance of set associative caches is analyzed. The method used is to group the cache lines into regions according to their positions in the replacement stacks of a cache, and then to observe how the memory access of a CPU is distributed over these regions. Results from the preserved CPU traces show that the memory accesses are heavily concentrated on the most recently used (MRU) region in the cache. The concept of MRU change is introduced; the idea is to use the event that the CPU accesses a non-MRU line to approximate the time the CPU is changing its working set. The concept is shown to be useful in many aspects of cache design and performance evaluation, such as comparison of various replacement algorithms, improvement of prefetch algorithms, and speedup of cache simulation.
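A toy version of the MRU-change event is easy to state in code. This hedged Python sketch models a single LRU replacement stack and counts accesses that miss the MRU position, which the paper uses as a proxy for working-set changes; the associativity, trace, and function name are illustrative, not taken from the paper.

```python
from collections import deque

def mru_change_count(trace, assoc=4):
    """Count accesses that hit outside the MRU position of one LRU stack,
    a proxy for working-set changes in the MRU-change sense."""
    stack = deque(maxlen=assoc)          # most recently used at index 0
    changes = 0
    for line in trace:
        if not stack or stack[0] != line:  # access is not to the MRU line
            changes += 1
        if line in stack:
            stack.remove(line)
        stack.appendleft(line)             # full deque evicts the LRU end
    return changes

print(mru_change_count(list("AAAABBBBAAAA")))  # -> 3 non-MRU accesses
```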
Joint Optimization of Task Scheduling and Image Placement in Fog Computing Supported Software-Defined Embedded System. Traditional standalone embedded systems are limited in their functionality, flexibility, and scalability. The fog computing platform, characterized by pushing cloud services to the network edge, is a promising solution to support and strengthen traditional embedded systems. Resource management is always a critical issue for system performance. In this paper, we consider a fog computing supported software-defined embedded system, where task images lie in the storage server while computations can be conducted on either the embedded device or a computation server. It is significant to design an efficient task scheduling and resource management strategy with minimized task completion time for promoting the user experience. To this end, three issues are investigated in this paper: 1) how to balance the workload on a client device and computation servers, i.e., task scheduling; 2) how to place task images on storage servers, i.e., resource management; and 3) how to balance the I/O interrupt requests among the storage servers. They are jointly considered and formulated as a mixed-integer nonlinear programming problem. To deal with its high computation complexity, a computation-efficient solution is proposed based on our formulation and validated by extensive simulation based studies.
Communication-efficient leader election and consensus with limited link synchrony We study the degree of synchrony required to implement the leader election failure detector Ω and to solve consensus in partially synchronous systems. We show that in a system with n processes and up to f process crashes, one can implement Ω and solve consensus provided there exists some (unknown) correct process with f outgoing links that are eventually timely. In the special case where f = 1, an important case in practice, this implies that to implement Ω and solve consensus it is sufficient to have just one eventually timely link -- all the other links in the system, Θ(n²) of them, may be asynchronous. There is no need to know which link p → q is eventually timely, when it becomes timely, or what is its bound on message delay. Surprisingly, it is not even required that the source p or destination q of this link be correct: either p or q may actually crash, in which case the link p → q is eventually timely in a trivial way, and it is useless for sending messages. We show that these results are in a sense optimal: even if every process has f - 1 eventually timely links, neither Ω nor consensus can be solved. We also give an algorithm that implements Ω in systems where some correct process has f outgoing links that are eventually timely, such that eventually only f links carry messages, and we show that this is optimal. For f = 1, this algorithm ensures that all the links, except for one, eventually become quiescent.
The Quadrature LC Oscillator: A Complete Portrait Based on Injection Locking We show that the quadrature LC oscillator is best treated as two strongly coupled, nominally identical oscillators that are locked to the same frequency. Differential equations that extend Adler's description of locking to strong injection reveal the full dynamics of this circuit. With a simplifying insight, the analysis reveals all the modes of the oscillator, their stability, the effects of mism...
Architectural Evolution of Integrated M-Phase High-Q Bandpass Filters M-phase bandpass filters (BPFs) are analyzed, and variations of the structure are proposed. For values of M that are integer multiples of 4, the conventional M-phase BPF structure is modified to take complex baseband impedances and frequency-translate their complex impedance response to the local oscillator frequency. Also, it is demonstrated how the M-phase BPF can be modified to implement a high quality factor (Q) image-rejection BPF with quadrature RF inputs. In addition, we present high-Q BPFs whose center frequencies are equal to the sum or difference of the RF and IF (intermediate frequency) clocks. Such filters can be useful in heterodyne receiver architectures.
Implementation of LTE SC-FDMA on the USRP2 software defined radio platform In this paper we discuss the implementation of a Single Carrier Frequency Division Multiple Access (SC-FDMA) transceiver running over the Universal Software Radio Peripheral 2 (USRP2). SC-FDMA is the air interface which has been selected for the uplink in the latest Long Term Evolution (LTE) standard. In this paper we derive an AWGN channel model for SC-FDMA transmission, which is useful for benchmarking experimental results. In our implementation, we deal with signal scaling, equalization and partial synchronization to realize SC-FDMA transmission over a noisy channel at rates up to 5.184 Mbit/s. Experimental results on the Bit Error Rate (BER) versus Signal-to-Noise Ratio (SNR) are presented and compared to theoretical and simulated performance.
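To make the SC-FDMA air interface concrete, here is a brief numpy sketch of the DFT-spread-OFDM transmit path with localized subcarrier mapping. It is a schematic model under simplifying assumptions, not the authors' USRP2 implementation: the FFT size, subcarrier offset, and the omission of a cyclic prefix and pulse shaping are all choices made for the example.

```python
import numpy as np

def scfdma_modulate(symbols, n_fft=64, first_sc=10):
    """DFT-spread OFDM: spread M symbols with an M-point DFT, map them onto
    contiguous subcarriers, then take an N-point IDFT (localized SC-FDMA)."""
    m = len(symbols)
    spread = np.fft.fft(symbols) / np.sqrt(m)     # M-point DFT precoding
    grid = np.zeros(n_fft, dtype=complex)
    grid[first_sc:first_sc + m] = spread          # localized subcarrier mapping
    return np.fft.ifft(grid) * np.sqrt(n_fft)     # unitary N-point IDFT

bits = np.random.randint(0, 2, (2, 16))
qpsk = ((1 - 2 * bits[0]) + 1j * (1 - 2 * bits[1])) / np.sqrt(2)  # unit-power QPSK
tx = scfdma_modulate(qpsk)
print(tx.shape, np.round(np.mean(np.abs(tx) ** 2), 3))  # (64,) 0.25
```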
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
1.025625
0.025
0.025
0.016667
0.008333
0.0025
0.000833
0
0
0
0
0
0
0
Analysis and design of a multistage CMOS band-pass low-noise preamplifier for ultrawideband RF receiver A CMOS low-noise preamplifier for application in a 3.1-10.6-GHz ultrawideband radio-frequency (RF) receiver system is presented. This is essentially a wideband-pass multistage RF preamplifier using a cascade of a three-segment band-pass LC π-section filter with a common-gate stage as the front end. Fundamental design analysis in terms of gain, bandwidth, noise, and impedance matching for the amplifier is presented in detail. The preamplifier was fabricated using the low-cost TSMC 0.18-μm 6M1P CMOS process technology. The amplifier delivered a buffered power gain (S21) of ≈ 14 dB with a -3-dB bandwidth (between the corner frequencies) of around 7.5 GHz. It consumed around 30 mW from a 2.5-V supply voltage. It had a minimum passband noise figure of around 4.7 dB, an input-referred third-order intercept point of -5.3 dBm, and reverse isolation (S12) under -65 dB.
Inductorless Wideband CMOS Low-Noise Amplifiers Using Noise-Canceling Technique Two inductorless wideband low-noise amplifiers (LNAs) fabricated in a 65-nm CMOS process are presented. By using the gain-enhanced noise-canceling technique, the gain at the noise-canceling condition is increased, while the input matching is maintained. The first work is a common-source LNA with resistive shunt feedback. It achieves a maximum power gain of 10.5 dB, a bandwidth of 10 GHz, a noise figure (NF) of 2.7-3.3 dB, and an IIP3 of -3.5 dBm. The power consumption is 13.7 mW from a 1-V supply, and the area is 0.02 mm². The second work is a common-gate LNA. It achieves a maximum power gain of 10.7 dB, a bandwidth of 5.2 GHz, an NF of 2.9-5.4 dB, and an IIP3 of -6 dBm. The power consumption is 7 mW from a 1-V supply, and the area is 0.03 mm². Experimental results demonstrate that the first LNA shows the largest bandwidth, and the second LNA has the lowest power consumption among the inductorless wideband LNAs.
A Broadband Noise-Canceling CMOS LNA for 3.1–10.6-GHz UWB Receivers An ultra-wideband 3.1-10.6-GHz low-noise amplifier employing a broadband noise-canceling technique is presented. By using the proposed circuit and design methodology, the noise from the matching device is greatly suppressed over the desired UWB band, while the noise from other devices performing noise cancellation is minimized by the systematic approach. Fabricated in a 0.18-μm CMOS process, the ...
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ◊W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ◊W. Thus, ◊W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Unreliable failure detectors for reliable distributed systems We introduce the concept of unreliable failure detectors and study how they can be used to solve Consensus in asynchronous systems with crash failures. We characterise unreliable failure detectors in terms of two properties—completeness and accuracy. We show that Consensus can be solved even with unreliable failure detectors that make an infinite number of mistakes, and determine which ones can be used to solve Consensus despite any number of crashes, and which ones require a majority of correct processes. We prove that Consensus and Atomic Broadcast are reducible to each other in asynchronous systems with crash failures; thus, the above results also apply to Atomic Broadcast. A companion paper shows that one of the failure detectors introduced here is the weakest failure detector for solving Consensus [Chandra et al. 1992].
Max-Min D-Cluster Formation in Wireless Ad Hoc Networks An ad hoc network may be logically represented as a set of clusters. The clusterheads form a d-hop dominating set. Each node is at most d hops from a clusterhead. Clusterheads form a virtual backbone and may be used to route packets for nodes in their cluster. Previous heuristics restricted themselves to 1-hop clusters. We show that the minimum d-hop dominating set problem is NP-complete. Then we present a heuristic to form d-clusters in a wireless ad hoc network. Nodes are assumed to have a non-deterministic mobility pattern. Clusters are formed by diffusing node identities along the wireless links. When the heuristic terminates, a node either becomes a clusterhead, or is at most d wireless hops away from its clusterhead. The value of d is a parameter of the heuristic. The heuristic can be run either at regular intervals, or whenever the network configuration changes. One of the features of the heuristic is that it tends to re-elect existing clusterheads even when the network configuration changes. This helps to reduce the communication overheads during transition from old clusterheads to new clusterheads. Also, there is a tendency to evenly distribute the mobile nodes among the clusterheads, and evenly distribute the responsibility of acting as clusterheads among all nodes. Thus, the heuristic is fair and stable. Simulation experiments demonstrate that the proposed heuristic is better than the two earlier heuristics, namely the LCA [1] and Degree-based [11] solutions.
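A compact way to see the floodmax/floodmin mechanics is a toy synchronous simulation. The Python sketch below follows the spirit of the heuristic under strong simplifying assumptions (node ids as the election criterion, a static graph, and a simplified rendering of the paper's selection rules); it is illustrative, not the authors' algorithm verbatim.

```python
def max_min_d_cluster(adj, d):
    """Toy synchronous Max-Min pass: d floodmax rounds, d floodmin rounds,
    then three simplified election rules over each node's winner log."""
    w = {v: v for v in adj}
    logs = {v: [] for v in adj}              # winner seen by v at each round
    for rnd in range(2 * d):
        op = max if rnd < d else min         # floodmax, then floodmin
        w = {v: op([w[v]] + [w[u] for u in adj[v]]) for v in adj}
        for v in adj:
            logs[v].append(w[v])
    heads = {}
    for v in adj:
        fmax, fmin = set(logs[v][:d]), set(logs[v][d:])
        if v in fmin:                        # rule 1: saw own id on the way down
            heads[v] = v
        elif fmax & fmin:                    # rule 2: overlapping "node pair"
            heads[v] = min(fmax & fmin)
        else:                                # rule 3: fall back to floodmax winner
            heads[v] = logs[v][d - 1]
    return heads

ring = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
print(max_min_d_cluster(ring, d=2))  # each node maps to its clusterhead id
```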
Differential Power Analysis. Cryptosystem designers frequently assume that secrets will be manipulated in closed, reliable computing environments. Unfortunately, actual computers and microchips leak information about the operations they process. This paper examines specific methods for analyzing power consumption measurements to find secret keys from tamper-resistant devices. We also discuss approaches for building cryptosystems that can operate securely in existing hardware that leaks information.
Fog computing and its role in the internet of things Fog Computing extends the Cloud Computing paradigm to the edge of the network, thus enabling a new breed of applications and services. Defining characteristics of the Fog are: a) Low latency and location awareness; b) Wide-spread geographical distribution; c) Mobility; d) Very large number of nodes, e) Predominant role of wireless access, f) Strong presence of streaming and real time applications, g) Heterogeneity. In this paper we argue that the above characteristics make the Fog the appropriate platform for a number of critical Internet of Things (IoT) services and applications, namely, Connected Vehicle, Smart Grid, Smart Cities, and, in general, Wireless Sensors and Actuators Networks (WSANs).
Quadratic programming with one negative eigenvalue is NP-hard We show that the problem of minimizing a concave quadratic function with one concave direction is NP-hard. This result can be interpreted as an attempt to understand exactly what makes nonconvex quadratic programming problems hard. Sahni in 1974 [8] showed that quadratic programming with a negative definite quadratic term (n negative eigenvalues) is NP-hard, whereas Kozlov, Tarasov and Hacijan [2] showed in 1979 that the ellipsoid algorithm solves the convex quadratic problem (no negative eigenvalues) in polynomial time. This report shows that even one negative eigenvalue makes the problem NP-hard.
Backwards-compatible array bounds checking for C with very low overhead The problem of enforcing correct usage of array and pointer references in C and C++ programs remains unsolved. The approach proposed by Jones and Kelly (extended by Ruwase and Lam) is the only one we know of that does not require significant manual changes to programs, but it has extremely high overheads of 5x-6x and 11x-12x in the two versions. In this paper, we describe a collection of techniques that dramatically reduce the overhead of this approach, by exploiting a fine-grain partitioning of memory called Automatic Pool Allocation. Together, these techniques bring the average overhead of the checks down to only 12% for a set of benchmarks (but 69% for one case). We show that the memory partitioning is key to bringing down this overhead. We also show that our technique successfully detects all buffer overrun violations in a test suite modeling reported violations in some important real-world programs.
Phoenix: Detecting and Recovering from Permanent Processor Design Bugs with Programmable Hardware Although processor design verification consumes ever-increasing resources, many design defects still slip into production silicon. In a few cases, such bugs have caused expensive chip recalls. To truly improve productivity, hardware bugs should be handled like system software ones, with vendors periodically releasing patches to fix hardware in the field. Based on an analysis of serious design defects in current AMD, Intel, IBM, and Motorola processors, this paper proposes and evaluates Phoenix -- novel field-programmable on-chip hardware that detects and recovers from design defects. Phoenix taps key logic signals and, based on downloaded defect signatures, combines the signals into conditions that flag defects. On defect detection, Phoenix flushes the pipeline and either retries or invokes a customized recovery handler. Phoenix induces negligible slowdown, while adding only 0.05% area and 0.48% wire overheads. Phoenix detects all the serious defects that are triggered by concurrent control signals. Moreover, it recovers from most of them, and simplifies recovery for the rest. Finally, we present an algorithm to automatically size Phoenix for new processors.
Power saving of a dynamic width controller for a monolithic current-mode CMOS DC-DC converter We propose the dynamic power MOS width controlling technique and the adaptive gate driver voltage technique to find the better approach to power saving in DC-DC converters. The dynamic power MOS width controlling technique improves power consumption much more than the adaptive gate driver voltage technique, whether the load current is heavy or light. After the dynamic power MOS width modification, the simulation results show that the efficiency of the current-mode DC-DC buck converter can be improved from 92% to about 98% at heavy load and from 15% to about 16.3% at light load. The adaptive gate driver voltage technique, by contrast, offers only a small improvement in power saving. This means that the dynamic width controller is the better approach to power saving in the DC-DC converter.
Conductance modulation techniques in switched-capacitor DC-DC converter for maximum-efficiency tracking and ripple mitigation in 22nm Tri-gate CMOS Active conduction modulation techniques are demonstrated in a fully integrated multi-ratio switched-capacitor voltage regulator with hysteretic control, implemented in 22nm tri-gate CMOS with high-density MIM capacitor. We present (i) an adaptive switching frequency and switch-size scaling scheme for maximum efficiency tracking across a wide range of voltages and currents, governed by a frequency-based control law that is experimentally validated across multiple dies and temperatures, and (ii) a simple active ripple mitigation technique to modulate the gate drive of select MOSFET switches effectively in all conversion modes. Efficiency boosts of up to 15% are measured under light-load conditions. Load-independent output ripple of <50 mV is achieved, enabling fewer interleaving phases. Testchip implementations and measurements demonstrate ease of integration in SoC designs, power efficiency benefits and EMI/RFI improvements.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
1.2
0.028571
0.002041
0
0
0
0
0
0
0
0
0
0
0
Measuring Temporal Lags in Delay-Tolerant Networks Delay-tolerant networks (DTNs) are characterized by a possible absence of end-to-end communication routes at any instant. Yet, connectivity can be achieved over time and space, leading to evaluate a given route both in terms of topological length or temporal length. The problem of measuring temporal distances in a social network was recently addressed through postprocessing contact traces like email data sets, in which all contacts are punctual in time (i.e., they have no duration). We focus on the distributed version of this problem and address the more general case that contacts can have arbitrary durations (i.e., be nonpunctual). Precisely, we ask whether each node in a network can track in real time how "out-of-date" it is with respect to every other. Although relatively straightforward with punctual contacts, this problem is substantially more complex with arbitrarily long contacts: consecutive hops of an optimal route may either be disconnected (intermittent connectedness of DTNs) or connected (i.e., the presence of links overlaps in time, implying a continuum of path opportunities). The problem is further complicated (and yet, more realistic) by the fact that we address continuous-time systems and nonnegligible message latencies (time to propagate a single message over a single link); however, this latency is assumed fixed and known. We demonstrate the problem is solvable in this general context by generalizing a time-measurement vector clock construct to the case of "nonpunctual" causality, which results in a tool we call T-Clocks, of independent interest. The remainder of the paper shows how T-Clocks can be leveraged to solve concrete problems such as learning foremost broadcast trees (BTs), network backbones, or fastest broadcast trees in periodic DTNs.
Exploration of Constantly Connected Dynamic Graphs Based on Cactuses. We study the problem of exploration by a mobile entity (agent) of a class of dynamic networks, namely constantly connected dynamic graphs. This problem has already been studied in the case where the agent knows the dynamics of the graph and the underlying graph is a ring of n vertices [5]. In this paper, we consider the same problem and we suppose that the underlying graph is a cactus graph (a connected graph in which any two simple cycles have at most one vertex in common). We propose an algorithm that allows the agent to explore these dynamic graphs in at most 2^(O(√(log n)))·n time units. We also show a lower bound of 2^(Ω(√(log n)))·n time units.
Efficient routing in carrier-based mobile networks The past years have seen an intense research effort directed at study of delay/disruption tolerant networks and related concepts (intermittently connected networks, opportunistic mobility networks). As a fundamental primitive, routing in such networks has been one of the research foci. While multiple network models have been proposed and routing in them investigated, most of the published results are of heuristic nature with experimental validation; analytical results are scarce and apply mostly to networks whose structure follows deterministic schedule. In this paper, we propose a simple model of opportunistic mobility network based on oblivious carriers, and investigate the routing problem in such networks. We present an optimal online routing algorithm and compare it with a simple shortest-path inspired routing and optimal offline routing. In doing so, we identify the key parameters (the minimum non-zero probability of meeting among the carrier pairs, and the number of carriers a given carrier comes into contact) driving the separation among these algorithms.
Shortest, Fastest, And Foremost Broadcast In Dynamic Networks Highly dynamic networks rarely offer end-to-end connectivity at a given time. Yet, connectivity in these networks can be established over time and space, based on temporal analogues of multi-hop paths (also called journeys). Attempting to optimize the selection of the journeys in these networks naturally leads to the study of three cases: shortest (minimum hop), fastest (minimum duration), and foremost (earliest arrival) journeys. Efficient centralized algorithms exist to compute all cases, when the full knowledge of the network evolution is given. In this paper, we study the distributed counterparts of these problems, i.e. shortest, fastest, and foremost broadcast with termination detection (TDB), with minimal knowledge on the topology. We show that the feasibility of each of these problems requires distinct features on the evolution, through identifying three classes of dynamic graphs wherein the problems become gradually feasible: graphs in which the re-appearance of edges is recurrent (class R), bounded-recurrent (class B), or periodic (class P), together with specific knowledge, respectively n (the number of nodes), Δ (a bound on the recurrence time), and p (the period). In these classes it is not required that all pairs of nodes get in contact, only that the overall footprint of the graph is connected over time. Our results, together with the strict inclusions between P, B, and R, imply a feasibility order among the three variants of the problem, i.e. TDB[foremost] requires weaker assumptions on the topology dynamics than TDB[shortest], which itself requires less than TDB[fastest]. Conversely, these differences in feasibility imply that the computational powers of R_n, B_Δ, and P_p also form a strict hierarchy.
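Since the distinctions among shortest, fastest, and foremost journeys hinge on arrival time, a tiny centralized example helps. The Python sketch below computes foremost (earliest-arrival) times from a source over punctual, time-stamped contacts with unit latency; it is an illustrative offline calculation, not the paper's distributed broadcast algorithms, and the contact encoding is an assumption of the example.

```python
def foremost_arrival(contacts, source, t0=0):
    """Earliest-arrival ("foremost") times from source over punctual,
    time-stamped contacts (u, v, t) with unit message latency: a contact
    at time t is usable only if we already reached u by time t."""
    arrival = {source: t0}
    for u, v, t in sorted(contacts, key=lambda c: c[2]):  # scan in time order
        if u in arrival and arrival[u] <= t:
            arrival[v] = min(arrival.get(v, float("inf")), t + 1)
    return arrival

links = [("a", "b", 1), ("b", "c", 2), ("a", "c", 9)]
print(foremost_arrival(links, "a"))  # {'a': 0, 'b': 2, 'c': 3}
```

Note that "c" is reached foremost at time 3 via two hops, even though the direct contact ("a", "c", 9) exists: foremost journeys optimize arrival time, not hop count.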
Agreement in directed dynamic networks We study the fundamental problem of achieving consensus in a synchronous dynamic network, where an omniscient adversary controls the unidirectional communication links. Its behavior is modeled as a sequence of directed graphs representing the active (i.e. timely) communication links per round. We prove that consensus is impossible under some natural weak connectivity assumptions, and introduce vertex-stable root components as a--practical and not overly strong--means for circumventing this impossibility. Essentially, we assume that there is a short period of time during which an arbitrary part of the network remains strongly connected, while its interconnect topology keeps changing continuously. We present a consensus algorithm that works under this assumption, and prove its correctness. Our algorithm maintains a local estimate of the communication graphs, and applies techniques for detecting stable network properties and univalent system configurations. Our possibility results are complemented by several impossibility results and lower bounds, which reveal that our algorithm is asymptotically optimal.
Graph exploration by a finite automaton A finite automaton, simply referred to as a robot, has to explore a graph whose nodes are unlabeled and whose edge ports are locally labeled at each node. The robot has no a priori knowledge of the topology of the graph or of its size. Its task is to traverse all the edges of the graph. We first show that, for any K-state robot and any d ≥ 3, there exists a planar graph of maximum degree d with at most K + 1 nodes that the robot cannot explore. This bound improves all previous bounds in the literature. More interestingly, we show that, in order to explore all graphs of diameter D and maximum degree d, a robot needs Ω(D log d) memory bits, even if we restrict the exploration to planar graphs. This latter bound is tight. Indeed, a simple DFS up to depth D + 1 enables a robot to explore any graph of diameter D and maximum degree d using a memory of size O(D log d) bits. We thus prove that the worst case space complexity of graph exploration is Θ(D log d) bits.
Time-varying graphs and dynamic networks The past decade has seen intensive research efforts on highly dynamic wireless and mobile networks (variously called delay-tolerant, disruption-tolerant, challenged, opportunistic, etc.) whose essential feature is a possible absence of end-to-end communication routes at any instant. As part of these efforts, a number of important concepts have been identified, based on new meanings of distance and connectivity. The main contribution of this paper is to review and integrate the collection of these concepts, formalisms, and related results found in the literature into a unified coherent framework, called TVG (for time-varying graphs). Besides this definitional work, we connect the various assumptions through a hierarchy of classes of TVGs defined with respect to properties with algorithmic significance in distributed computing. One of these classes coincides with the family of dynamic graphs over which population protocols are defined. We examine the (strict) inclusion hierarchy among the classes. The paper also provides a quick review of recent stochastic models for dynamic networks that aim to enable analytical investigation of the dynamics.
Peer counting and sampling in overlay networks: random walk methods In this article we address the problem of counting the number of peers in a peer-to-peer system, and more generally of aggregating statistics of individual peers over the whole system. This functionality is useful in many applications, but hard to achieve when each node has only a limited, local knowledge of the whole system. We propose two generic techniques to solve this problem. The Random Tour method is based on the return time of a continuous time random walk to the node originating the query. The Sample and Collide method is based on counting the number of random samples gathered until a target number of redundant samples are obtained. It is inspired by the "birthday paradox" technique of [6], upon which it improves by achieving a target variance with fewer samples. The latter method relies on a sampling sub-routine which returns randomly chosen peers. Such a sampling algorithm is of independent interest. It can be used, for instance, for neighbour selection by new nodes joining the system. We use a continuous time random walk to obtain such samples. We analyse the complexity and accuracy of the two methods. We illustrate in particular how expansion properties of the overlay affect their performance.
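To illustrate the birthday-paradox flavour of Sample and Collide, here is a hedged Python sketch: it draws uniformly random peers until a target number of repeats is seen, then inverts the expected-sample bound E[samples] ≈ √(2Cn) to estimate n. The uniform sampler stands in for the paper's random-walk sampling subroutine, and the constants are illustrative, not the paper's tuned estimator.

```python
import random

def birthday_estimate(sample_peer, target_collisions=10):
    """Estimate population size by sampling until `target_collisions`
    repeats occur, then inverting E[samples] ~ sqrt(2 * C * n)."""
    seen, samples, collisions = set(), 0, 0
    while collisions < target_collisions:
        p = sample_peer()                 # stand-in for random-walk sampling
        samples += 1
        if p in seen:
            collisions += 1
        else:
            seen.add(p)
    return samples * samples / (2.0 * target_collisions)

n = 5000
print(round(birthday_estimate(lambda: random.randrange(n))))  # roughly 5000
```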
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
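The one operation Chord exports, mapping a key onto a node, can be mimicked with a consistent-hashing toy. The sketch below hashes node names and keys onto a small identifier ring and returns the key's clockwise successor; it is a schematic model only (real Chord resolves the successor in O(log N) hops via finger tables, which this omits), and the hash width and names are arbitrary choices for the example.

```python
import hashlib

def ring_id(name, bits=16):
    """Hash a string onto the 2^bits identifier ring."""
    return int(hashlib.sha1(name.encode()).hexdigest(), 16) % (1 << bits)

def successor(node_ids, key_id):
    """The node responsible for key_id is the first node at or after it,
    wrapping around the ring."""
    for nid in sorted(node_ids):
        if nid >= key_id:
            return nid
    return min(node_ids)              # wrap around past the top of the ring

nodes = [ring_id(f"node-{i}") for i in range(8)]
kid = ring_id("some-data-item")
print(f"key {kid} -> node {successor(nodes, kid)}")
```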
The M-Machine multicomputer The M-Machine is an experimental multicomputer being developed to test architectural concepts motivated by the constraints of modern semiconductor technology and the demands of programming systems. The M-Machine computing nodes are connected with a 3-D mesh network; each node is a multithreaded processor incorporating 9 function units, on-chip cache, and local memory. The multiple function units are used to exploit both instruction-level and thread-level parallelism. A user accessible message passing system yields fast communication and synchronization between nodes. Rapid access to remote memory is provided transparently to the user with a combination of hardware and software mechanisms. This paper presents the architecture of the M-Machine and describes how its mechanisms attempt to maximize both single thread performance and overall system throughput. The architecture is complete and the MAP chip, which will serve as the M-Machine processing node, is currently being implemented.
Controlling the cost of reliability in peer-to-peer overlays Structured peer-to-peer overlay networks provide a useful substrate for building distributed applications but there are general concerns over the cost of maintaining these overlays. The current approach is to configure the overlays statically and conservatively to achieve the desired reliability even under uncommon adverse conditions. This results in high cost in the common case, or poor reliability in worse than expected conditions. We analyze the cost of overlay maintenance in realistic dynamic environments and design novel techniques to reduce this cost by adapting to the operating conditions. With our techniques, the concerns over the overlay maintenance cost are no longer warranted. Simulations using real traces show that they enable high reliability and performance even in very adverse conditions with low maintenance cost.
Chameleon: a dual-mode 802.11b/Bluetooth receiver system design In this paper, an approach to map the Bluetooth and 802.11b standards specifications into an architecture and specifications for the building blocks of a dual-mode direct conversion receiver is proposed. The design procedure focuses on optimizing the performance in each operating mode while attaining an efficient dual-standard solution. The impact of the expected receiver nonidealities and the characteristics of each building block are evaluated through bit-error-rate simulations. The proposed receiver design is verified through a fully integrated implementation from low-noise amplifier to analog-to-digital converter using IBM 0.25-μm BiCMOS technology. Experimental results from the integrated prototype meet the specifications from both standards and are in good agreement with the target sensitivity.
Optimum insertion/deletion point selection for fractional sample rate conversion In this paper, an optimum insertion/deletion point selection algorithm for fractional sample rate conversion (SRC) is proposed. The direct insertion/deletion technique achieves low complexity and low power consumption as compared to the other fractional SRC methods. Using a multiple set insertion/deletion technique is efficient for reduction of distortion caused by the insertion/deletion step. When the conversion factor is (N ± δ)/N, the number of possible patterns of insertion/deletion points and the number of combinations for multiple set inserters/deleters grow as δ increases. The proposed algorithm minimizes the distortion due to SRC by selecting the patterns and the combinations.
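A deliberately naive rendering of the direct deletion technique shows where the paper's optimization enters. In this Python sketch, down-conversion by (N - k)/N simply deletes k fixed samples per N-sample block; choosing which indices to insert or delete, and which combinations across multiple sets, is exactly the selection problem the proposed algorithm optimizes. The fixed first-k choice and names here are assumptions for illustration.

```python
def direct_deletion_src(x, n=8, k=1):
    """Crude (N - k)/N down-conversion: delete k samples per n-sample block.
    The naive choice of *which* indices to drop (here: the first k) is the
    degree of freedom the paper's selection algorithm optimizes."""
    drop = set(range(k))                       # naive: always drop the first k
    return [s for i, s in enumerate(x) if (i % n) not in drop]

samples = list(range(24))
print(direct_deletion_src(samples))            # 21 of 24 samples survive
```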
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
1.04381
0.04
0.04
0.04
0.026667
0.018139
0.009314
0.000245
0.000022
0
0
0
0
0
GenPIP: In-Memory Acceleration of Genome Analysis via Tight Integration of Basecalling and Read Mapping Nanopore sequencing is a widely-used high-throughput genome sequencing technology that can sequence long fragments of a genome into raw electrical signals at low cost. Nanopore sequencing requires two computationally-costly processing steps for accurate downstream genome analysis. The first step, basecalling, translates the raw electrical signals into nucleotide bases (i.e., A, C, G, T). The second step, read mapping, finds the correct location of a read in a reference genome. In existing genome analysis pipelines, basecalling and read mapping are executed separately. We observe in this work that such separate execution of the two most time-consuming steps inherently leads to (1) significant data movement and (2) redundant computations on the data, slowing down the genome analysis pipeline. This paper proposes GenPIP, an in-memory genome analysis accelerator that tightly integrates basecalling and read mapping. GenPIP improves the performance of the genome analysis pipeline with two key mechanisms: (1) in-memory fine-grained collaborative execution of the major genome analysis steps in parallel; (2) a new technique for early rejection of low-quality and unmapped reads to timely stop the execution of genome analysis for such reads, reducing inefficient computation. Our experiments show that, for the execution of the genome analysis pipeline, GenPIP provides 41.6× (8.4×) speedup and 32.8× (20.8×) energy savings with negligible accuracy loss compared to the state-of-the-art software genome analysis tools executed on a state-of-the-art CPU (GPU). Compared to a design that combines state-of-the-art in-memory basecalling and read mapping accelerators, GenPIP provides 1.39× speedup and 1.37× energy savings.
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use dominance frontiers, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
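The notion of a dominance frontier can be captured directly from its definition. The Python sketch below first computes dominator sets with the classic iterative data-flow method and then derives DF(d) as the nodes y where d dominates a predecessor of y but does not strictly dominate y itself; this quadratic set-based formulation is for exposition only, as the paper computes frontiers far more efficiently via the dominator tree. The CFG encoding is an assumption of the example.

```python
def dominators(cfg, entry):
    """Iterative dominator sets: dom(n) = {n} ∪ ⋂ dom(p) over preds p of n."""
    nodes = set(cfg)
    preds = {n: {p for p in cfg if n in cfg[p]} for n in cfg}
    dom = {n: set(nodes) for n in nodes}
    dom[entry] = {entry}
    changed = True
    while changed:
        changed = False
        for n in nodes - {entry}:
            new = {n} | set.intersection(*(dom[p] for p in preds[n])) if preds[n] else {n}
            if new != dom[n]:
                dom[n], changed = new, True
    return dom, preds

def dominance_frontiers(cfg, entry):
    """DF(d) = nodes y such that d dominates a predecessor of y
    but does not strictly dominate y itself."""
    dom, preds = dominators(cfg, entry)
    df = {n: set() for n in cfg}
    for y in cfg:
        for p in preds[y]:
            for d in dom[p]:
                if d not in dom[y] or d == y:   # d does not strictly dominate y
                    df[d].add(y)
    return df

cfg = {"entry": ["a", "b"], "a": ["c"], "b": ["c"], "c": []}
print(dominance_frontiers(cfg, "entry"))  # DF(a) = DF(b) = {'c'}
```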
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ◊W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ◊W. Thus, ◊W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area² product (EDA²P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA²P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
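As a concrete instance of the method applied to one of the surveyed problems, the lasso, here is a short numpy sketch of textbook ADMM: split the objective as (1/2)||Ax - b||² + λ||z||₁ with the constraint x = z, then alternate a ridge-like x-update, a soft-thresholding z-update, and a dual update on u. The step size ρ, iteration count, and synthetic data are illustrative choices, not prescriptions from the review.

```python
import numpy as np

def admm_lasso(A, b, lam, rho=1.0, iters=200):
    """Textbook ADMM for the lasso via the splitting x = z."""
    n = A.shape[1]
    x = z = u = np.zeros(n)
    AtA, Atb = A.T @ A, A.T @ b
    L = np.linalg.cholesky(AtA + rho * np.eye(n))      # factor once, reuse
    for _ in range(iters):
        rhs = Atb + rho * (z - u)
        x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))          # x-update
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0.0)  # shrink
        u = u + x - z                                              # dual ascent
    return z

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 10))
truth = np.zeros(10); truth[:3] = [2.0, -1.5, 1.0]    # sparse ground truth
b = A @ truth + 0.01 * rng.standard_normal(50)
print(np.round(admm_lasso(A, b, lam=0.5), 2))          # recovers the sparse pattern
```

The factor-once Cholesky reuse in the x-update is the standard trick the review highlights: only the soft-threshold and dual steps change across iterations.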
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)-size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε²}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D+√n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(log N), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator, while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via error amplifier) with a moderate-gain high-frequency path (via bandpass filter) at the inputs of the PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate the two zeros required in a Type III compensator. A constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process, with the constant Gm/C biasing technique applied to one of the designs. Measurement results show that the converter output settles within 7 μs for a load current step of 500 mA. Peak efficiency of 97% is obtained at 360 mW output power, and high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of the proposed compensator are reduced by >75% in both designs, compared to an equivalent conventional Type III compensator.
Fundamental analysis of a car to car visible light communication system This paper presents a mathematical model for car-to-car (C2C) visible light communications (VLC) that aims to predict the system performance under different communication geometries. A market-weighted headlamp beam pattern model is employed. We consider both the line-of-sight (LOS) and non-line-of-sight (NLOS) links, and outline the relationship between the communication distance, the system bit error rate (BER) performance and the BER distribution on a vertical plane. Results show that by placing a photodetector (PD) at a height of 0.2-0.4 m above the road surface in the car, the communications coverage range can be extended up to 20 m at a data rate of 2 Mbps.
A 32-Channel Time-Multiplexed Artifact-Aware Neural Recording System This paper presents a low-power, low-noise microsystem for the recording of neural local field potentials or intracranial electroencephalographic signals. It features 32 time-multiplexed channels at the electrode interface and offers the possibility to spatially delta encode data to take advantage of the large correlation of signals captured from nearby channels. The circuit also implements a mixed-signal voltage-triggered auto-ranging algorithm which makes it possible to attenuate large interferers in the digital domain while preserving neural information. This effectively increases the system dynamic range and avoids the onset of saturation. A prototype, fabricated in a standard 180 nm CMOS process, has been experimentally verified in vitro with cellular cultures of primary cortical neurons from mice. The system shows an integrated input-referred noise in the 0.5–200 Hz band of 1.4 μVrms for a spot noise of about 85 nV/√Hz. The system draws 1.5 μW per channel from a 1.2 V supply and obtains 71 dB + 26 dB dynamic range when the artifact-aware auto-ranging mechanism is enabled, without penalising other critical specifications such as crosstalk between channels or common-mode and power supply rejection ratios.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Electromagnetic Full-Wave Simulation of Partial Discharge Detection in High Voltage AC Cables Partial discharge (PD) activity in the insulation system of electrical equipment can lead to the failure of the whole apparatus. PD sensors are widely used in high-voltage electrical systems as the main elements of a detection system for real-time monitoring. Recently, non-invasive sensors have been proposed in industrial applications for cables and other sensitive electrical parts; they are based on both capacitive coupling and electromagnetic radiating coupling. In order to assess the real performance of a new sensor produced by a high voltage AC cables manufacturer, the paper presents electromagnetic full-wave simulation results.
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use dominance frontiers, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ◊W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ◊W. Thus, ◊W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
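A toy sketch of the key-to-node mapping Chord implements, using consistent hashing on one identifier ring. Real Chord resolves the successor through per-node finger tables in O(log n) hops; this centralized version (all names illustrative) only shows the mapping itself:

    import hashlib
    from bisect import bisect_left

    M = 2 ** 16  # small identifier space (Chord uses SHA-1's full 160 bits)

    def ident(name: str) -> int:
        return int(hashlib.sha1(name.encode()).hexdigest(), 16) % M

    def successor(node_ids, key_id):
        # a key is stored on the first node whose id >= key id, wrapping around
        ids = sorted(node_ids)
        return ids[bisect_left(ids, key_id) % len(ids)]

    nodes = [ident(f"node{i}") for i in range(8)]
    print(successor(nodes, ident("some-key")))  # node responsible for the key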
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area² product (EDA²P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA²P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
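For reference, the scaled-form ADMM iterations for $\min f(x) + g(z)$ subject to $Ax + Bz = c$ are commonly written as:

    $x^{k+1} = \arg\min_x \left( f(x) + (\rho/2)\,\|Ax + Bz^k - c + u^k\|_2^2 \right)$
    $z^{k+1} = \arg\min_z \left( g(z) + (\rho/2)\,\|Ax^{k+1} + Bz - c + u^k\|_2^2 \right)$
    $u^{k+1} = u^k + Ax^{k+1} + Bz^{k+1} - c$

where $u$ is the scaled dual variable and $\rho > 0$ the penalty parameter.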
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)-size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε²}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D+√n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(log N), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator, while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via error amplifier) with a moderate-gain high-frequency path (via bandpass filter) at the inputs of the PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate the two zeros required in a Type III compensator. A constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process, with the constant Gm/C biasing technique applied to one of the designs. Measurement results show that the converter output settles within 7 μs for a load current step of 500 mA. A peak efficiency of 97% is obtained at 360 mW output power, and a high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of the proposed compensator are reduced by more than 75% in both designs, compared to an equivalent conventional Type III compensator.
Fundamental analysis of a car to car visible light communication system This paper presents a mathematical model for car-to-car (C2C) visible light communications (VLC) that aims to predict the system performance under different communication geometries. A market-weighted headlamp beam pattern model is employed. We consider both the line-of-sight (LOS) and non-line-of-sight (NLOS) links, and outline the relationship between the communication distance, the system bit error rate (BER) performance and the BER distribution on a vertical plane. Results show that by placing a photodetector (PD) at a height of 0.2-0.4 m above road surface in the car, the communications coverage range can be extended up to 20 m at a data rate of 2 Mbps.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Dual-output capacitive DC-DC converter with power distribution regulator in 90 nm CMOS.
A Wide Load Range and High Efficiency Switched-Capacitor DC-DC Converter With Pseudo-Clock Controlled Load-dependent Frequency A high efficiency 3.3 V-to-1 V switched-capacitor (SC) step-down DC-DC converter with load-dependent frequency control (LFC) and deep-green mode (DGM) operation is proposed for system-on-a-chip (SoC) applications. According to the output loading current, the LFC technique immediately and dynamically adjusts the switching frequency through a pseudo-clock generator (PCG) and a lead-lag detector (LLD) circuit to obtain high power conversion efficiency and small output voltage ripple over a wide loading current range, so adequate load-current delivery and output voltage regulation are guaranteed. Moreover, the DGM operation, similar to pulse-skipping mode, masks the switching clock to reduce power loss at ultra-light loads, further improving power efficiency. The test chip fabricated in a 55 nm CMOS process demonstrates that the proposed fast-transient converter can deliver a wide load range from 10 mA to 250 mA with two small flying capacitors (CF1, CF2 = 0.1 μF) and one output capacitor (COUT = 1 μF). The peak conversion efficiency is 89% compared to the ideal value of 91% (3·VOUT/VIN); in other words, the peak normalized efficiency equals 98%. The overall normalized efficiency is always kept higher than 90% while the output voltage ripple is guaranteed to be smaller than 30 mV.
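The normalized-efficiency figures quoted above follow from the ideal conversion limit of the 3.3 V-to-1 V topology; as a quick check:

    $\eta_{ideal} = 3\,V_{OUT}/V_{IN} = 3/3.3 \approx 91\%$, so $\eta_{norm} = \eta_{peak}/\eta_{ideal} = 89\%/91\% \approx 98\%$,

consistent with the stated peak normalized efficiency.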
Digital pulse frequency modulation for switched capacitor DC-DC converter on 65nm process The DC-DC converter is one of the most important building blocks in any system-on-chip (SoC): it supplies various voltage levels to the chip's loads while achieving high power efficiency. Pulse frequency modulation (PFM) is the main control technique for voltage regulation of the switched-capacitor DC-DC power converter. This paper proposes a design of a digital PFM controller in Verilog-HDL, verified on a 65 nm low-power process technology. The design includes the generation of the non-overlapping clock by a ring oscillator and a dead-time circuit instead of the default clock. The PFM block has a total power of 7 µW, an area of 46.4 µm², and a slack time of 0.5 ns.
An Efficient Switched-Capacitor DC-DC Buck Converter for Self-Powered Wearable Electronics. This paper introduces an efficient reconfigurable, multiple voltage gain switched-capacitor dc-dc buck converter as part of a power management unit for wearable electronics. The proposed switched-capacitor converter has an input voltage of 0.6 V to 1.2 V generated from an energy harvesting source. The switched-capacitor converter utilizes pulse frequency modulation to generate multiple regulated o...
When hardware is free, power is expensive! Is integrated power management the solution? In the last several years, significant efforts and advances have been made towards the CMOS integration of power converters. In this paper, an overview is given of what might be considered the next step in this domain: AC-DC conversion, efficient high-ratio voltage conversion, wide operating range and energy storage for energy scavenging. The main focus is on CMOS integration as this is the ultimate goal from any system integration point of view. Also, an overview of the state of the art will be discussed.
A Recursive Switched-Capacitor DC-DC Converter Achieving $2^{N}-1$ Ratios With High Efficiency Over a Wide Output Voltage Range. A Recursive Switched-Capacitor (RSC) topology is introduced that enables reconfiguration among 2^N−1 conversion ratios while achieving minimal capacitive charge-sharing loss for a given silicon area. All 2^N−1 ratios are realized by strategically interconnecting N 2:1 SC cells either in series, in parallel, or in a stacked configuration such that the number of input and ground connections are maxi...
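One way to see where the $2^{N}-1$ count comes from: recursively composing N 2:1 cells reaches every binary-weighted ratio $m/2^{N}$ with $1 \le m \le 2^{N}-1$. A brute-force enumeration of those target ratios (illustrative only, not the paper's topology-synthesis procedure):

    from fractions import Fraction

    def rsc_ratios(n):
        # candidate conversion ratios m / 2^n, for m = 1 .. 2^n - 1
        return sorted({Fraction(m, 2 ** n) for m in range(1, 2 ** n)})

    for n in (1, 2, 3):
        r = rsc_ratios(n)
        assert len(r) == 2 ** n - 1          # exactly 2^N - 1 distinct ratios
        print(n, [str(x) for x in r])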
A Bayesian Method for the Induction of Probabilistic Networks from Data This paper presents a Bayesian method for constructing probabilistic networks from databases. In particular, we focus on constructing Bayesian belief networks. Potential applications include computer-assisted hypothesis testing, automated scientific discovery, and automated construction of probabilistic expert systems. We extend the basic method to handle missing data and hidden (latent) variables. We show how to perform probabilistic inference by averaging over the inferences of multiple belief networks. Results are presented of a preliminary evaluation of an algorithm for constructing a belief network from a database of cases. Finally, we relate the methods in this paper to previous work, and we discuss open problems.
Distributed estimation and quantization An algorithm is developed for the design of a nonlinear, n-sensor, distributed estimation system subject to communication and computation constraints. The algorithm uses only bivariate probability distributions and yields locally optimal estimators that satisfy the required system constraints. It is shown that the algorithm is a generalization of the classical Lloyd-Max results.
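A sketch of the classical scalar Lloyd-Max iteration that the distributed algorithm generalizes, alternating a nearest-level partition with a centroid update (data and level count here are illustrative):

    import random

    def lloyd_max(samples, levels, iters=50):
        reps = random.sample(samples, levels)      # initial reproduction levels
        for _ in range(iters):
            cells = [[] for _ in reps]
            for s in samples:                      # nearest-level partition
                cells[min(range(levels), key=lambda i: abs(s - reps[i]))].append(s)
            reps = [sum(c) / len(c) if c else r    # centroid (mean) update
                    for c, r in zip(cells, reps)]
        return sorted(reps)

    data = [random.gauss(0.0, 1.0) for _ in range(5000)]
    print(lloyd_max(data, levels=4))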
Distributed average consensus with least-mean-square deviation We consider a stochastic model for distributed average consensus, which arises in applications such as load balancing for parallel processors, distributed coordination of mobile autonomous agents, and network synchronization. In this model, each node updates its local variable with a weighted average of its neighbors' values, and each new value is corrupted by an additive noise with zero mean. The quality of consensus can be measured by the total mean-square deviation of the individual variables from their average, which converges to a steady-state value. We consider the problem of finding the (symmetric) edge weights that result in the least mean-square deviation in steady state. We show that this problem can be cast as a convex optimization problem, so the global solution can be found efficiently. We describe some computational methods for solving this problem, and compare the weights and the mean-square deviations obtained by this method and several other weight design methods.
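A small simulation of the noisy consensus model described above, with a hand-picked symmetric weight matrix for a 4-node ring (these weights are illustrative, not the optimized ones the paper computes):

    import random

    W = [[0.50, 0.25, 0.00, 0.25],
         [0.25, 0.50, 0.25, 0.00],
         [0.00, 0.25, 0.50, 0.25],
         [0.25, 0.00, 0.25, 0.50]]

    x = [1.0, 3.0, -2.0, 6.0]
    mean = sum(x) / len(x)
    for _ in range(200):
        # weighted average of neighbors' values plus zero-mean additive noise
        x = [sum(W[i][j] * x[j] for j in range(4)) + random.gauss(0, 0.01)
             for i in range(4)]
    print([round(v - mean, 3) for v in x])  # deviations stay small but nonzero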
An area-efficient multistage 3.0- to 8.5-GHz CMOS UWB LNA using tunable active inductors An area-efficient multistage 3.0- to 8.5-GHz ultra-wideband low-noise amplifier (LNA) utilizing tunable active inductors (AIs) is presented. The AI includes a negative impedance circuit (NIC) consisting of a pair of cross-coupled NMOS transistors and is tuned to vary the gain and bandwidth (BW) of the amplifier. Fabricated in a 90-nm digital CMOS process, the proposed fully on-chip LNA occupies a core chip area of only 0.022 mm2. The measurement results show a power gain S21 of 16.0 dB, a noise figure of 3.1-4.4 dB, and an input return loss S11 of less than -10.5 dB over the 3-dB BW of 3.0-8.5 GHz. Tuning the AIs allows one to increase the gain above 18.0 dB and to extend the BW over 9.4 GHz. The LNA consumes 16.0 mW from a power supply of 1.2 V.
Master Data Quality Barriers: An Empirical Investigation Purpose - The development of IT has enabled organizations to collect and store many times more data than they were able to just decades ago. This means that companies are now faced with managing huge amounts of data, which represents new challenges in ensuring high data quality. The purpose of this paper is to identify barriers to obtaining high master data quality. Design/methodology/approach - This paper defines relevant master data quality barriers and investigates their mutual importance through organizing data quality barriers identified in literature into a framework for analysis of data quality. The importance of the different classes of data quality barriers is investigated by a large questionnaire study, including answers from 787 Danish manufacturing companies. Findings - Based on a literature review, the paper identifies 12 master data quality barriers. The relevance and completeness of this classification is investigated by a large questionnaire study, which also clarifies the mutual importance of the defined barriers and the differences in importance in small, medium, and large companies. Research limitations/implications - The defined classification of data quality barriers provides a point of departure for future research by pointing to relevant areas for investigation of data quality problems. The limitations of the study are that it focuses only on manufacturing companies and master data (i.e. not transaction data). Practical implications - The classification of data quality barriers can give companies increased awareness of why they experience data quality problems. In addition, the paper suggests giving primary focus to organizational issues rather than perceiving poor data quality as an IT problem. Originality/value - Compared to extant classifications of data quality barriers, the contribution of this paper represents a more detailed and complete picture of what the barriers are in relation to data quality. Furthermore, the presented classification has been investigated by a large questionnaire study, for which reason it is founded on a more solid empirical basis than existing classifications.
Causality, influence, and computation in possibly disconnected synchronous dynamic networks In this work, we study the propagation of influence and computation in dynamic distributed computing systems that are possibly disconnected at every instant. We focus on a synchronous message-passing communication model with broadcast and bidirectional links. Our network dynamicity assumption is a worst-case dynamicity controlled by an adversary scheduler, which has received much attention recently. We replace the usual (in worst-case dynamic networks) assumption that the network is connected at every instant by minimal temporal connectivity conditions. Our conditions only require that another causal influence occurs within every time window of some given length. Based on this basic idea, we define several novel metrics for capturing the speed of information spreading in a dynamic network. We present several results that correlate these metrics. Moreover, we investigate termination criteria in networks in which an upper bound on any of these metrics is known. We exploit our termination criteria to provide efficient (and optimal in some cases) protocols that solve the fundamental counting and all-to-all token dissemination (or gossip) problems.
16.7 A 20V 8.4W 20MHz four-phase GaN DC-DC converter with fully on-chip dual-SR bootstrapped GaN FET driver achieving 4ns constant propagation delay and 1ns switching rise time Recently, the demand for miniaturized and fast transient response power delivery systems has been growing in high-voltage industrial electronics applications. Gallium Nitride (GaN) FETs, showing a superior figure of merit (Rds,ON × Qg) in comparison with silicon FETs [1], can enable both high-frequency and high-efficiency operation in these applications, thus making power converters smaller, faster and more efficient. However, the lack of GaN-compatible high-speed gate drivers is a major impediment to fully taking advantage of GaN FET-based power converters. Conventional high-voltage gate drivers usually exhibit a propagation delay, tdelay, of up to several tens of nanoseconds in the level shifter (LS), which becomes a critical problem as the switching frequency, fsw, reaches the 10 MHz regime. Moreover, the switching slew rate (SR) when driving GaN FETs needs particular care in order to maintain efficient and reliable operation. Driving power GaN FETs with a fast SR results in large switching voltage spikes, risking breakdown of low-Vgs GaN devices, while a slow SR leads to a long switching rise time, tR, which degrades efficiency and limits fsw. In [2], large tdelay and long tR in the GaN FET driver limit its fsw to 1MHz. A design reported in [3] improves tR to 1.2ns, thereby enabling fsw up to 10MHz. However, the unregulated switching dead time, tDT, then becomes a major limitation to further reduction of tdelay. This results in limited fsw and a narrower range of VIN-VO conversion ratio. Interleaved multiphase topologies can be the most effective way to increase system fsw. However, each extra phase requires a capacitor for bootstrapped (BST) gate driving, which incurs additional cost and complexity in the PCB design. Moreover, the requirements of fsw synchronization and balanced current sharing for high fsw operation in multiphase implementations are challenging.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
1.11
0.12
0.12
0.12
0.033333
0.013333
0
0
0
0
0
0
0
0
A 0.061 nJ/b 10 Mbps Hybrid BF-PSK Receiver for Internet of Things Applications This paper describes a hybrid binary frequency-phase shift keying (BF-PSK) receiver architecture designed with a technique involving both the received signal frequency and phase for low-power operation with relatively high data rate. The method enables the demodulation of the incoming signal without synchronization requirements, which reduces the design complexity and power consumption. The architecture allows programmable data rates and channel bandwidths according to application-specific needs. A novel low-noise amplifier architecture is introduced in this paper as well. The Medical Implant Communication System (MICS) band receiver was designed and fabricated in a standard 65 nm CMOS technology, and the measurement results demonstrate the feasibility of this architecture. As a proof-of-concept, it operates with a 416 MHz carrier frequency at a state-of-the-art data rate for sub-milliwatt receivers of 10 Mbps, while consuming 610 µW from a 1 V supply.
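The headline energy efficiency follows directly from the measured power and data rate:

    $E_b = P / R_b = 610\,\mu\text{W} / 10\,\text{Mb/s} = 61\,\text{pJ/b} = 0.061\,\text{nJ/b}$,

matching the figure in the title.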
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use dominance frontiers, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ◊W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ◊W. Thus, ◊W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area² product (EDA²P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA²P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)-size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε²}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D+√n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(log N), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator, while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via error amplifier) with a moderate-gain high-frequency path (via bandpass filter) at the inputs of the PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate the two zeros required in a Type III compensator. A constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process, with the constant Gm/C biasing technique applied to one of the designs. Measurement results show that the converter output settles within 7 μs for a load current step of 500 mA. A peak efficiency of 97% is obtained at 360 mW output power, and a high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of the proposed compensator are reduced by more than 75% in both designs, compared to an equivalent conventional Type III compensator.
Fundamental analysis of a car to car visible light communication system This paper presents a mathematical model for car-to-car (C2C) visible light communications (VLC) that aims to predict the system performance under different communication geometries. A market-weighted headlamp beam pattern model is employed. We consider both the line-of-sight (LOS) and non-line-of-sight (NLOS) links, and outline the relationship between the communication distance, the system bit error rate (BER) performance and the BER distribution on a vertical plane. Results show that by placing a photodetector (PD) at a height of 0.2-0.4 m above road surface in the car, the communications coverage range can be extended up to 20 m at a data rate of 2 Mbps.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
PyMTL: A Unified Framework for Vertically Integrated Computer Architecture Research Technology trends prompting architects to consider greater heterogeneity and hardware specialization have exposed an increasing need for vertically integrated research methodologies that can effectively assess performance, area, and energy metrics of future architectures. However, constructing such a methodology with existing tools is a significant challenge due to the unique languages, design patterns, and tools used in functional-level (FL), cycle-level (CL), and register-transfer-level (RTL) modeling. We introduce a new framework called PyMTL that aims to close this computer architecture research methodology gap by providing a unified design environment for FL, CL, and RTL modeling. PyMTL leverages the Python programming language to create a highly productive domain-specific embedded language for concurrent-structural modeling and hardware design. While the use of Python as a modeling and framework implementation language provides considerable benefits in terms of productivity, it comes at the cost of significantly longer simulation times. We address this performance-productivity gap with a hybrid JIT compilation and JIT specialization approach. We introduce SimJIT, a custom JIT specialization engine that automatically generates optimized C++ for CL and RTL models. To reduce the performance impact of the remaining unspecialized code, we combine SimJIT with an off-the-shelf Python interpreter with a meta-tracing JIT compiler (PyPy). SimJIT+PyPy provides speedups of up to 72× for CL models and 200× for RTL models, bringing us within 4–6× of optimized C++ code while providing significant benefits in terms of productivity and usability.
Just-In-Time Compilation for Verilog: A New Technique for Improving the FPGA Programming Experience FPGAs offer compelling acceleration opportunities for modern applications. However compilation for FPGAs is painfully slow, potentially requiring hours or longer. We approach this problem with a solution from the software domain: the use of a JIT. Code is executed immediately in a software simulator, and compilation is performed in the background. When finished, the code is moved into hardware, and from the user's perspective it simply gets faster. We have embodied these ideas in Cascade: the first JIT compiler for Verilog. Cascade reduces the time between initiating compilation and running code to less than a second, and enables generic printf debugging from hardware. Cascade preserves program performance to within 3× in a debugging environment, and has minimal effect on a finalized design. Crucially, these properties hold even for programs that perform side effects on connected IO devices. A user study demonstrates the value to experts and non-experts alike: Cascade encourages more frequent compilation, and reduces the time to produce working hardware designs.
Threaded code The concept of “threaded code” is presented as an alternative to machine language code. Hardware and software realizations of it are given. In software it is realized as interpretive code not needing an interpreter. Extensions and optimizations are mentioned.
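A rough Python analogue of the idea: the "program" is a sequence of routine references (with inline operands) executed back to back, with no opcode-decoding switch in the inner loop. This is only a sketch of the concept, not a faithful hardware or assembly realization:

    def lit(vm):  vm.stack.append(vm.code[vm.ip]); vm.ip += 1   # push inline operand
    def add(vm):  b, a = vm.stack.pop(), vm.stack.pop(); vm.stack.append(a + b)
    def halt(vm): vm.running = False

    class VM:
        def __init__(self, code):
            self.code, self.ip, self.stack, self.running = code, 0, [], True
        def run(self):
            while self.running:
                op = self.code[self.ip]; self.ip += 1
                op(self)                 # jump straight to the next routine

    vm = VM([lit, 2, lit, 40, add, halt])
    vm.run()
    print(vm.stack)                      # [42]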
Formal verification in hardware design: a survey In recent years, formal methods have emerged as an alternative approach to ensuring the quality and correctness of hardware designs, overcoming some of the limitations of traditional validation techniques such as simulation and testing. There are two main aspects to the application of formal methods in a design process: the formal framework used to specify desired properties of a design and the verification techniques and tools used to reason about the relationship between a specification and a corresponding implementation. We survey a variety of frameworks and techniques proposed in the literature and applied to actual designs. The specification frameworks we describe include temporal logics, predicate logic, abstraction and refinement, as well as containment between ω-regular languages. The verification techniques presented include model checking, automata-theoretic techniques, automated theorem proving, and approaches that integrate the above methods. In order to provide insight into the scope and limitations of currently available techniques, we present a selection of case studies where formal methods were applied to industrial-scale designs, such as microprocessors, floating-point hardware, protocols, memory subsystems, and communications hardware.
The Oracle Problem in Software Testing: A Survey Testing involves examining the behaviour of a system in order to discover potential faults. Given an input for a system, the challenge of distinguishing the corresponding desired, correct behaviour from potentially incorrect behavior is called the “test oracle problem”. Test oracle automation is important to remove a current bottleneck that inhibits greater overall test automation. Without test or...
BROOM: An Open-Source Out-of-Order Processor With Resilient Low-Voltage Operation in 28-nm CMOS The Berkeley resilient out-of-order machine (BROOM) is a resilient, wide-voltage-range implementation of an open-source out-of-order (OoO) RISC-V processor implemented in an ASIC flow. A 28-nm test-chip contains a BOOM OoO core and a 1-MiB level-2 (L2) cache, enhanced with architectural error tolerance for low-voltage operation. It was implemented by using an agile design methodology, where the initial OoO architecture was transformed to perform well in a high-performance, low-leakage CMOS process, informed by synthesis, place, and route data by using foundry-provided standard-cell library and memory compiler. The two-person-team productivity was improved in part thanks to a number of open-source artifacts: The Chisel hardware construction language, the RISC-V instruction set architecture, the rocket-chip SoC generator, and the open-source BOOM core. The resulting chip, taped out using TSMC’s 28-nm HPM process, runs at 1.0 GHz at 0.9 V, and is able to operate down to 0.47 V.
A Case for Accelerating Software RTL Simulation RTL simulation is a critical tool for hardware design but its current slow speed often bottlenecks the whole design process. Simulation speed becomes even more crucial for agile and open-source hardware design methodologies, because the designers not only want to iterate on designs quicker, but they may also have less resources with which to simulate them. In this article, we execute multiple simulators and analyze them with hardware performance counters. We find some open-source simulators not only outperform a leading commercial simulator, they also achieve comparable or higher instruction throughput on the host processor. Although advanced optimizations may increase the complexity of the simulator, they do not significantly hinder instruction throughput. Our findings make the case that there is significant room to accelerate software simulation and open-source simulators are a great starting point for researchers.
Hidden factors and hidden topics: understanding rating dimensions with review text In order to recommend products to users we must ultimately predict how a user will respond to a new product. To do so we must uncover the implicit tastes of each user as well as the properties of each product. For example, in order to predict whether a user will enjoy Harry Potter, it helps to identify that the book is about wizards, as well as the user's level of interest in wizardry. User feedback is required to discover these latent product and user dimensions. Such feedback often comes in the form of a numeric rating accompanied by review text. However, traditional methods often discard review text, which makes user and product latent dimensions difficult to interpret, since they ignore the very text that justifies a user's rating. In this paper, we aim to combine latent rating dimensions (such as those of latent-factor recommender systems) with latent review topics (such as those learned by topic models like LDA). Our approach has several advantages. Firstly, we obtain highly interpretable textual labels for latent rating dimensions, which helps us to `justify' ratings with text. Secondly, our approach more accurately predicts product ratings by harnessing the information present in review text; this is especially true for new products and users, who may have too few ratings to model their latent factors, yet may still provide substantial information from the text of even a single review. Thirdly, our discovered topics can be used to facilitate other tasks such as automated genre discovery, and to identify useful and representative reviews.
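The latent-factor predictor this line of work builds on is the standard biased inner-product model,

    $\text{rec}(u, i) = \alpha + \beta_u + \beta_i + \gamma_u^{\top} \gamma_i$,

with global offset $\alpha$, user and item biases $\beta_u, \beta_i$, and latent vectors $\gamma_u, \gamma_i$; the paper's contribution is to tie the item factors $\gamma_i$ to LDA-style topic proportions learned from review text, so that the latent dimensions become interpretable.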
Halide: a language and compiler for optimizing parallelism, locality, and recomputation in image processing pipelines Image processing pipelines combine the challenges of stencil computations and stream programs. They are composed of large graphs of different stencil stages, as well as complex reductions, and stages with global or data-dependent access patterns. Because of their complex structure, the performance difference between a naive implementation of a pipeline and an optimized one is often an order of magnitude. Efficient implementations require optimization of both parallelism and locality, but due to the nature of stencils, there is a fundamental tension between parallelism, locality, and introducing redundant recomputation of shared values. We present a systematic model of the tradeoff space fundamental to stencil pipelines, a schedule representation which describes concrete points in this space for each stage in an image processing pipeline, and an optimizing compiler for the Halide image processing language that synthesizes high performance implementations from a Halide algorithm and a schedule. Combining this compiler with stochastic search over the space of schedules enables terse, composable programs to achieve state-of-the-art performance on a wide range of real image processing pipelines, and across different hardware architectures, including multicores with SIMD, and heterogeneous CPU+GPU execution. From simple Halide programs written in a few hours, we demonstrate performance up to 5x faster than hand-tuned C, intrinsics, and CUDA implementations optimized by experts over weeks or months, for image processing applications beyond the reach of past automatic compilers.
Unreliable failure detectors for reliable distributed systems We introduce the concept of unreliable failure detectors and study how they can be used to solve Consensus in asynchronous systems with crash failures. We characterise unreliable failure detectors in terms of two properties—completeness and accuracy. We show that Consensus can be solved even with unreliable failure detectors that make an infinite number of mistakes, and determine which ones can be used to solve Consensus despite any number of crashes, and which ones require a majority of correct processes. We prove that Consensus and Atomic Broadcast are reducible to each other in asynchronous systems with crash failures; thus, the above results also apply to Atomic Broadcast. A companion paper shows that one of the failure detectors introduced here is the weakest failure detector for solving Consensus [Chandra et al. 1992].
Impulse radio: how it works Impulse radio, a form of ultra-wide bandwidth (UWB) spread-spectrum signaling, has properties that make it a viable candidate for short-range communications in dense multipath environments. This letter describes the characteristics of impulse radio using a modulation format that can be supported by currently available impulse signal technology and gives analytical estimates of its multiple-access capability under ideal multiple-access channel conditions.
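A time-hopping PPM signal of the kind described is commonly modeled in the impulse-radio literature (notation follows that convention and is not taken from this letter) as

    $s(t) = \sum_{j} w\big(t - jT_f - c_j T_c - \delta\, d_{\lfloor j/N_s \rfloor}\big)$,

where $w(t)$ is the transmitted monocycle, $T_f$ the frame time, $c_j$ the user's time-hopping sequence, $T_c$ the chip duration, $\delta$ the PPM time shift, and $N_s$ the number of pulses per data symbol.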
New OPBHWICAP Interface for Realtime Partial Reconfiguration of FPGA We propose in this paper a timing analysis of dynamic partial reconfiguration (PR) applied to a NoC (Network on Chip) structure inside an FPGA. In the context of an SDR (Software Defined Radio) example, PR is used to dynamically reconfigure a baseband processing block of a 4G telecommunication chain running in real time (data rates up to 100 Mbps). The results presented show the validity of our methodology for PR management with respect to the timing performance obtained in a real implementation. PR timing is a key point in making the SDR approach realistic. These results show that using PR, FPGAs combine the flexibility of SW (software) and the processing power of HW (hardware). This makes PR a tremendous enabling technology for SDR. These results are based on a new IP core managing the ICAP component, which achieves a speed-up factor of 124 compared to the provided OPBHWICAP. Moreover, we have integrated a methodology which can significantly reduce the bitstream size and consequently the reconfiguration duration. The results presented in this paper show that PR reconfiguration time can go down to a few tens of microseconds. This makes PR really attractive for SDR design or any other highly demanding real-time application.
Characteristics of LNA Operation in Direct Delta–Sigma Receivers This brief analyzes the dual role and operation of the low noise amplifier (LNA) in the recently introduced direct delta-sigma receiver (DDSR). First, the LNA functions as a transconductor in an integrator stage, and in this role, we explore the effects of LNA output impedance on quantization noise shaping by the system. In the second role of a voltage preamplifier, we show how the closed-loop DDSR structure impacts LNA voltage gain and system noise. LNA and system properties are thus intertwined and lead to the need for careful codesign. The reliability of the utilized continuous-time DDSR approximation is verified by simulating a sample receiver model.
A 0.5 V 10-bit 3 MS/s SAR ADC With Adaptive-Reset Switching Scheme and Near-Threshold Voltage-Optimized Design Technique This brief presents a 10-bit ultra-low power energy-efficient successive approximation register (SAR) analog-to-digital converter (ADC). A new adaptive-reset switching scheme is proposed to reduce the switching energy of the capacitive digital-to-analog converter (CDAC). The proposed adaptive-reset switching scheme reduces the average switching energy of the CDAC by 90% compared to the conventional scheme, without common-mode voltage variation. In addition, a near-threshold voltage (NTV)-optimized digital library is adopted to alleviate the performance degradation at ultra-low supply voltage while simultaneously increasing the energy efficiency. The NTV-optimized design technique is also applied to the bootstrapped switch design to improve the linearity of the sample-and-hold circuit. The test chip is fabricated in a 65 nm CMOS, and its core area is 0.022 mm². At a supply of 0.5 V and a sampling speed of 3 MS/s, the SAR ADC achieves an ENOB of 8.78 bit and consumes 3.09 μW. The resultant Walden figure-of-merit (FoM) is 2.34 fJ/conv.-step.
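The quoted figure of merit is consistent with the other reported numbers under the standard Walden formula FoM = P / (2^ENOB · f_s); a quick arithmetic check:

```python
# Recomputing the Walden FoM from the values quoted in the abstract.
P = 3.09e-6      # power, W
ENOB = 8.78      # effective number of bits
fs = 3e6         # sampling rate, samples/s

fom = P / (2**ENOB * fs)
print(f"{fom * 1e15:.2f} fJ/conv.-step")   # -> 2.34, matching the abstract
```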
1.037601
0.0375
0.033333
0.033333
0.033333
0.033333
0.033333
0.003333
0.000556
0
0
0
0
0
CUDAlign 4.0: Incremental Speculative Traceback for Exact Chromosome-Wide Alignment in GPU Clusters. This paper proposes and evaluates CUDAlign 4.0, a parallel strategy to obtain the optimal alignment of huge DNA sequences in multi-GPU platforms, using the exact Smith–Waterman (SW) algorithm. In the first phase of CUDAlign 4.0, a huge Dynamic Programming (DP) matrix is computed by multiple GPUs, which asynchronously communicate border elements to the right neighbor in order to find the optimal sc...
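For reference, the exact Smith-Waterman recurrence that CUDAlign parallelizes looks as follows in a deliberately naive Python form (linear gap penalty, score only). The scoring constants are illustrative; the actual system adds affine gaps, traceback, and GPU-scale wavefront blocking on top of this recurrence.

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
    """Optimal local alignment score via the O(len(a)*len(b)) SW DP matrix."""
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best

print(smith_waterman("GATTACA", "GCATGCU"))
```

The quadratic cell count is what makes chromosome-scale inputs demand the multi-GPU decomposition the paper describes.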
GenAx: A Genome Sequencing Accelerator. Genomics can transform health-care through precision medicine. Plummeting sequencing costs would soon make genome testing affordable to the masses. Compute efficiency, however, has to improve by orders of magnitude to sequence and analyze the raw genome data. Sequencing software used today can take several hundreds to thousands of CPU hours to align reads to a reference sequence. This paper presents GenAx, an accelerator for read alignment, a time-consuming step in genome sequencing. It consists of a seeding and seed-extension accelerator. The latter is based on an innovative automata design that was designed from the ground up to enable hardware acceleration. Unlike conventional Levenshtein automata, it is string independent and scales quadratically with edit distance, instead of string length. It supports critical features commonly used in sequencing such as affine gap scoring and traceback. GenAx provides a throughput of 4,058K reads/s for Illumina 101 bp reads. GenAx achieves 31.7x speedup over the standard BWA-MEM sequence aligner running on a 56-thread dual-socket 14-core Xeon E5 server processor, while reducing power consumption by 12x and area by 5.6x.
AligneR: A Process-in-Memory Architecture for Short Read Alignment in ReRAMs. Genomics is the key to enable the personal customization of medical care. How to fast and energy-efficiently analyze the huge amounts of genomic sequence data generated by next generation sequencing technologies has become one of the most significant challenges facing genomics today. Existing hardware platforms achieve low genome sequencing throughput with significant hardware and power overhead. ...
SWIFOLD: Smith-Waterman implementation on FPGA with OpenCL for long DNA sequences. The results suggest that SWIFOLD can be a serious contender for accelerating the SW alignment of DNA sequences of unrestricted size in an affordable way, reaching 125 GCUPS on average and a peak of almost 270 GCUPS.
Approximate Memristive In-memory Computing. The bottleneck between the processing elements and memory is the biggest issue contributing to the scalability problem in computing. In-memory computation is an alternative approach that combines memory and processor in the same location and eliminates potential memory bottlenecks. Associative processors are a promising candidate for in-memory computation; however, existing implementations have been deemed too costly and power hungry. Approximate computing is another promising approach for energy-efficient digital system design, which sacrifices accuracy for the sake of energy reduction and speedup in error-resilient applications. In this study, approximate in-memory computing is introduced in memristive associative processors. Two approximate computing methodologies are proposed: bit trimming and memristance scaling. Results show that the proposed methods not only reduce the energy consumption of in-memory parallel computing but also improve its performance. As compared to other existing approximate computing methodologies on different architectures (e.g., CPU, GPU, and ASIC), approximate memristive in-memory computing exhibits better results in terms of energy reduction (up to 80x) and speedup (up to 20x) on a variety of benchmarks from different domains when quality degradation is limited to 10%, confirming that memristive associative processors provide a highly promising platform for approximate computing.
Seed-and-Vote based In-Memory Accelerator for DNA Read Mapping Genome analysis is becoming more important in the fields of forensic science, medicine, and history. Sequencing technologies such as High Throughput Sequencing (HTS) and Third Generation Sequencing (TGS) have greatly accelerated genome sequencing. However, genome read mapping remains significantly slower than sequencing. Because of the enormous amount of data needed, the speed of the data transfer between the memory and the processing unit limits the execution speed. In-memory computing can help address the memory-bandwidth bottleneck by minimizing data transfers. Ternary Content Addressable Memories (TCAMs) have been used in accelerators because of their fast searching capability for seed-and-extend, a popular read mapping approach. Seed-and-vote, another read mapping approach, is faster than the seed-and-extend approach but has lower accuracies when used with very short reads. Since sequencing technology is moving to longer reads, the seed-and-vote approach is becoming more viable. We propose a genome read mapping accelerator that uses approximate TCAM to execute the Fast Seed and Vote algorithm (FSVA) that can map both short and long reads. We achieved 400X acceleration compared to the seed-and-extend approach BWA-MEM on a CPU and 115X acceleration at 30X energy improvement compared to state-of-the-art in-memory accelerator using the seed-and-extend approach at 98.75% accuracy for 100bp reads.
An End-to-end Oxford Nanopore Basecaller Using Convolution-augmented Transformer Oxford Nanopore sequencing is fast becoming an active field in genomics, and it is critical to basecall nucleotide sequences from the complex electrical signals. Many efforts have been devoted to developing new basecalling tools over the years. However, basecalled reads still suffer from a high error rate and slow speed. Here, we developed an open-source basecalling method, CATCaller, which simultaneously captures global context through attention and models local dependencies through dynamic convolution. The method was shown to consistently outperform the ONT default basecallers Albacore and Guppy, and a recently developed attention-based method, SACall, in read accuracy. More importantly, our method is fast through a heterogeneous computational model that integrates both CPUs and GPUs. When compared to SACall, the method is nearly 4 times faster on a single GPU and is highly scalable in parallelization, with a further speedup of 3.3 on a four-GPU node.
Minimap2: pairwise alignment for nucleotide sequences. Motivation: Recent advances in sequencing technologies promise ultra-long reads of ~100 kb on average, full-length mRNA or cDNA reads in high throughput and genomic contigs over 100 Mb in length. Existing alignment programs are unable or inefficient to process such data at scale, which presses for the development of new alignment algorithms. Results: Minimap2 is a general-purpose alignment program to map DNA or long mRNA sequences against a large reference database. It works with accurate short reads of >=100 bp in length, >=1 kb genomic reads at an error rate of ~15%, full-length noisy Direct RNA or cDNA reads and assembly contigs or closely related full chromosomes of hundreds of megabases in length. Minimap2 does split-read alignment, employs concave gap cost for long insertions and deletions and introduces new heuristics to reduce spurious alignments. It is 3-4 times as fast as mainstream short-read mappers at comparable accuracy, and is >=30 times faster than long-read genomic or cDNA mappers at higher accuracy, surpassing most aligners specialized in one type of alignment.
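A hedged sketch of the (w,k)-minimizer seeding idea that minimap2 builds on: take the smallest k-mer in every window of w consecutive k-mers as an anchor. The real tool uses an invertible hash with strand canonicalization rather than lexicographic order, plus chaining; the parameters here are toy values.

```python
def minimizers(seq, k=5, w=4):
    """Return {(kmer, position)}: the minimum k-mer of each w-long window."""
    kmers = [seq[i:i + k] for i in range(len(seq) - k + 1)]
    picked = set()
    for start in range(len(kmers) - w + 1):
        window = kmers[start:start + w]
        m = min(window)                 # lexicographic stand-in for a hash
        picked.add((m, start + window.index(m)))
    return picked

print(sorted(minimizers("ACGTACGTGGTACG")))
```

Consecutive windows usually share their minimum, so far fewer seeds are indexed than with every k-mer, which keeps the index small and lookups fast.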
RowClone: fast and energy-efficient in-DRAM bulk data copy and initialization Several system-level operations trigger bulk data copy or initialization. Even though these bulk data operations do not require any computation, current systems transfer a large quantity of data back and forth on the memory channel to perform such operations. As a result, bulk data operations consume high latency, bandwidth, and energy--degrading both system performance and energy efficiency. In this work, we propose RowClone, a new and simple mechanism to perform bulk copy and initialization completely within DRAM -- eliminating the need to transfer any data over the memory channel to perform such operations. Our key observation is that DRAM can internally and efficiently transfer a large quantity of data (multiple KBs) between a row of DRAM cells and the associated row buffer. Based on this, our primary mechanism can quickly copy an entire row of data from a source row to a destination row by first copying the data from the source row to the row buffer and then from the row buffer to the destination row, via two back-to-back activate commands. This mechanism, which we call the Fast Parallel Mode of RowClone, reduces the latency and energy consumption of a 4KB bulk copy operation by 11.6x and 74.4x, respectively, and a 4KB bulk zeroing operation by 6.0x and 41.5x, respectively. To efficiently copy data between rows that do not share a row buffer, we propose a second mode of RowClone, the Pipelined Serial Mode, which uses the shared internal bus of a DRAM chip to quickly copy data between two banks. RowClone requires only a 0.01% increase in DRAM chip area. We quantitatively evaluate the benefits of RowClone by focusing on fork, one of the frequently invoked system calls, and five other copy and initialization intensive applications. Our results show that RowClone can significantly improve both single-core and multi-core system performance, while also significantly reducing main memory bandwidth and energy consumption.
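A toy software model (our abstraction, not the paper's circuit) of the Fast Parallel Mode: the first ACTIVATE latches the source row into the row buffer, and a back-to-back ACTIVATE to the destination row lets the still-driven sense amplifiers overwrite that row, so the copy never crosses the memory channel.

```python
class DramBank:
    """Toy DRAM bank illustrating RowClone-style in-bank row copy."""
    def __init__(self, n_rows, row_bytes):
        self.rows = [bytearray(row_bytes) for _ in range(n_rows)]
        self.latched = None        # contents held by the sense amplifiers

    def activate(self, r):
        if self.latched is None:
            self.latched = bytearray(self.rows[r])   # normal ACTIVATE: sense row
        else:
            self.rows[r][:] = self.latched           # back-to-back ACTIVATE: copy

    def precharge(self):
        self.latched = None

bank = DramBank(4, 8)
bank.rows[0][:] = b"SRC-DATA"
bank.activate(0)     # latch the source row
bank.activate(2)     # destination row is overwritten in-bank
bank.precharge()
assert bank.rows[2] == bank.rows[0]
```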
The gem5 simulator The gem5 simulation infrastructure is the merger of the best aspects of the M5 [4] and GEMS [9] simulators. M5 provides a highly configurable simulation framework, multiple ISAs, and diverse CPU models. GEMS complements these features with a detailed and flexible memory system, including support for multiple cache coherence protocols and interconnect models. Currently, gem5 supports most commercial ISAs (ARM, ALPHA, MIPS, Power, SPARC, and x86), including booting Linux on three of them (ARM, ALPHA, and x86). The project is the result of the combined efforts of many academic and industrial institutions, including AMD, ARM, HP, MIPS, Princeton, MIT, and the Universities of Michigan, Texas, and Wisconsin. Over the past ten years, M5 and GEMS have been used in hundreds of publications and have been downloaded tens of thousands of times. The high level of collaboration on the gem5 project, combined with the previous success of the component parts and a liberal BSD-like license, make gem5 a valuable full-system simulation tool.
Asynchronous Leader Election in Mobile Ad Hoc Networks With the proliferation of portable computing platforms and small wireless devices, the classical dilemma of leader election in mobile ad hoc networks has received attention from the research community in recent years. The problem aims to elect a unique leader among mobile nodes regardless of their physical locations. However, existing distributed leader election algorithms do not cope with the highly spontaneous nature of mobile ad hoc networks. This paper presents a consensus-based leader election algorithm that finds a local extremum among the nodes participating in leader election. The algorithm is highly adaptive to ad hoc networks in the sense that it can tolerate intermittent failures, such as link failures, sudden crash or recovery of mobile nodes, network partitions, and merging of connected network components associated with ad hoc networks. The paper also presents proofs of correctness to exhibit the fairness of this algorithm.
A New Approach to the Internally Positive Representation of Linear MIMO Systems The problem of representing linear systems through combinations of positive systems is relevant when signal processing schemes, such as filters, state observers, or control laws, are to be implemented using "positive" technologies, such as Charge Routing Networks and fiber optic filters. This problem, well investigated in the SISO case, can be recast into the more general problem of Internally Positive Representation (IPR) of systems. This paper presents a methodology for the construction of such IPRs for MIMO systems, based on a suitable convex positive representation of complex vectors and matrices. The stability properties of the IPRs are investigated in depth, achieving the important result that any stable system admits a stable IPR of finite dimension. A main algorithm and three variants, all based on the proposed methodology, are presented for the construction of stable IPRs. All of them are straightforward and are characterized by a very low computational cost. The first and second may require a large state-space dimension to provide a stable IPR, while the third and the fourth are aimed at providing stable IPRs of reduced order.
A 0.1–6.0-GHz Dual-Path SDR Transmitter Supporting Intraband Carrier Aggregation in 65-nm CMOS A 4.8-mm² 0.1–6.0-GHz dual-path software-defined radio transmitter supporting intraband carrier aggregation (CA) in 65-nm CMOS is presented. A simple approach is proposed to support intraband CA signals with only one I-Q baseband path. By utilizing the power-scalable and feedforward compensation techniques, the power of the wideband analog baseband is minimized. The transmitter consists of a high gain-range main path and a low-power subpath to cooperatively cover different standards over 0.1–6.0 GHz with more flexibility. The reconfigurable power amplifier (PA) driver achieves wideband frequency coverage with efficiency-enhanced on-chip transformers and improved switched-capacitor arrays. This transmitter achieves <−50-dBc image rejection ratio and <−40-dBc local oscillator signal leakage after calibration. System verifications have demonstrated −31/−51-dBc ACLR1/ACLR2 (adjacent channel leakage ratio) at 3-dBm output power for 2.3-GHz LTE20 in the main path and 1.7% error vector magnitude (EVM) at 1.5-dBm output for 1.8-GHz WCDMA in the subpath. Both paths enable SAW-less FDD operations with −153 or −156 dBc/Hz carrier-to-noise ratio at 200-MHz frequency offset. Finally, the dual CA signals with 55-MHz frequency spacing are verified, showing EVMs of 1.2% and 0.8%, respectively, and exhibiting the intraband CA capability.
OMNI: A Framework for Integrating Hardware and Software Optimizations for Sparse CNNs Convolution neural networks (CNNs) as one of today’s main flavor of deep learning techniques dominate in various image recognition tasks. As the model size of modern CNNs continues to grow, neural network compression techniques have been proposed to prune the redundant neurons and synapses. However, prior techniques disconnect the software neural networks compression and hardware acceleration, whi...
1.049934
0.0461
0.0461
0.04
0.04
0.04
0.04
0.0188
0.000012
0
0
0
0
0
Partial Discharge Detection Using a Spherical Electromagnetic Sensor. The presence of a partial discharge phenomenon in an electrical apparatus is a warning signal that could determine the failure of the insulation system, terminating the service of the apparatus and/or the network. In this paper, an innovative partial discharge (PD) measurement instrument based on an antenna sensor is presented and analyzed. Being non-intrusive is one of the most relevant features of the sensor. The frequency response of the antenna sensor and the features used to recognize different PD sources and automatically synchronize them with the supply voltage are described and discussed in detail. The results show that the instrument can provide a fast and correct diagnosis of the health state of insulation systems.
Electrical analogous in viscoelasticity • Mechanical models of the viscoelastic behavior of materials are approached by fractional calculus. • Electrical analogous circuits of fractional hereditary materials are proposed. • Validation is demonstrated by using modal analysis. • Electrical analogues can help in better revealing the real behavior of fractional hereditary materials.
Unconditionally stable meshless integration of time-domain Maxwell's curl equations. Grid-based methods coupled with an explicit approach for the evolution in time are traditionally adopted in solving PDEs in computational electromagnetics. The discretization in space with a grid covering the problem domain and a stability step-size restriction must be accepted. Evidence is given that effort is needed to overcome these heavy constraints. The connectivity laws among the points scattered in the problem domain can be avoided by using meshless methods. Among these, smoothed particle electromagnetics gives an interesting answer to the problem, overcoming the limit of grid generation. In the original formulation an explicit integration scheme is used, providing spatial and time discretization strictly interleaved and mutually conditioned. In this paper a formulation of the alternating direction implicit scheme is proposed in the meshless framework. The developed formulation preserves the leapfrog marching in time of the explicit integration scheme. Studies of the system matrices arising at each temporal step are reported with reference to the meshless discretization. The new method, not constrained by a grid in space and unconditionally stable in time, is validated by numerical simulations.
EMI filter design in motor drives with Common Mode voltage active compensation In this paper the design issues of input electromagnetic interference (EMI) filters for inverter-fed motor drives including motor Common Mode (CM) voltage active compensation are studied. A coordinated design of the motor CM-voltage active compensator and the input EMI filter allows the drive system to comply with EMC standards and to yield increased reliability at the same time. Two CM input EMI filters are built and compared. They are designed, respectively, according to the conventional design procedure and considering the actual impedance mismatch between EMI source and receiver. In both design procedures, the presence of the active compensator is taken into account. The experimental evaluation of both filters' performance is given in terms of compliance of the system with standard limits.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
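A minimal sketch of Chord's core mapping, simplified to a successor search over a sorted ring instead of the O(log n) finger-table routing the paper describes. The hash truncation, ID-space size, and node names are illustrative.

```python
import bisect
import hashlib

M = 2**16                                    # tiny ID space for illustration

def h(s):
    return int(hashlib.sha1(s.encode()).hexdigest(), 16) % M

class ChordRing:
    def __init__(self, nodes):
        self.ids = sorted(h(n) for n in nodes)
        self.by_id = {h(n): n for n in nodes}

    def successor(self, key):
        """A key is stored at the first node clockwise from its hash."""
        i = bisect.bisect_left(self.ids, h(key)) % len(self.ids)  # wrap at 0
        return self.by_id[self.ids[i]]

ring = ChordRing(["node-a", "node-b", "node-c", "node-d"])
print(ring.successor("some-key"))
```

When a node joins or leaves, only the keys between it and its predecessor move, which is the property behind Chord's logarithmic maintenance cost.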
Cache operations by MRU change The performance of set associative caches is analyzed. The method used is to group the cache lines into regions according to their positions in the replacement stacks of a cache, and then to observe how the memory access of a CPU is distributed over these regions. Results from the preserved CPU traces show that the memory accesses are heavily concentrated on the most recently used (MRU) region in the cache. The concept of MRU change is introduced; the idea is to use the event that the CPU accesses a non-MRU line to approximate the time the CPU is changing its working set. The concept is shown to be useful in many aspects of cache design and performance evaluation, such as comparison of various replacement algorithms, improvement of prefetch algorithms, and speedup of cache simulation.
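A hedged sketch of that measurement in Python: keep an LRU stack per set, record the stack depth of every hit, and count an "MRU change" whenever the CPU touches a line that is not at the top of its stack. The trace, geometry, and miss handling are deliberately simplified.

```python
from collections import defaultdict

def mru_profile(trace, n_sets=4, assoc=4, line=64):
    """Distribute accesses over LRU stack depths and count MRU changes."""
    stacks = defaultdict(list)              # per-set LRU stack of tags
    hits_at = [0] * assoc                   # hit count per stack depth
    mru_changes = 0
    for addr in trace:
        blk = addr // line
        tag, s = blk // n_sets, blk % n_sets
        st = stacks[s]
        if tag in st:
            d = st.index(tag)
            hits_at[d] += 1
            if d != 0:
                mru_changes += 1            # CPU left the MRU region
            st.remove(tag)
        elif len(st) == assoc:
            st.pop()                        # evict the LRU tag
        st.insert(0, tag)                   # accessed tag becomes MRU
    return hits_at, mru_changes

print(mru_profile([0, 0, 64, 0, 128, 0, 0], n_sets=1))
```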
Achievable rates in cognitive radio channels Cognitive radio promises a low-cost, highly flexible alternative to the classic single-frequency band, single-protocol wireless device. By sensing and adapting to its environment, such a device is able to fill voids in the wireless spectrum and can dramatically increase spectral efficiency. In this paper, the cognitive radio channel is defined as a two-sender, two-receiver interference channel in which sender 2 obtains the encoded message sender 1 plans to transmit. We consider two cases: in the genie-aided cognitive radio channel, sender 2 is noncausally presented the data to be transmitted by sender 1 while in the causal cognitive radio channel, the data is obtained causally. The cognitive radio at sender 2 may then choose to transmit simultaneously over the same channel, as opposed to waiting for an idle channel as is traditional for a cognitive radio. Our main result is the development of an achievable region which combines Gel'fand-Pinsker coding with an achievable region construction for the interference channel. In the additive Gaussian noise case, this resembles dirty-paper coding, a technique used in the computation of the capacity of the Gaussian multiple-input multiple-output (MIMO) broadcast channel. Numerical evaluation of the region in the Gaussian noise case is performed, and compared to an inner bound, the interference channel, and an outer bound, a modified Gaussian MIMO broadcast channel. Results are also extended to the case in which the message is causally obtained.
Gradient-Based Learning Applied to Document Recognition Multilayer neural networks trained with the back-propagation algorithm constitute the best example of a successful gradient based learning technique. Given an appropriate network architecture, gradient-based learning algorithms can be used to synthesize a complex decision surface that can classify high-dimensional patterns, such as handwritten characters, with minimal preprocessing. This paper rev...
Understanding Availability This paper addresses a simple, yet fundamental question in the design of peer-to-peer systems: What does it mean when we say "availability" and how does this understanding impact the engineering of practical systems? We argue that existing measurements and models do not capture the complex time-varying nature of availability in today's peer-to-peer environments. Further, we show that unforeseen methodological shortcomings have dramatically biased previous analyses of this phenomenon. As the basis of our study, we empirically characterize the availability of a large peer-to-peer system over a period of 7 days, analyze the dependence of the underlying availability distributions, measure host turnover in the system, and discuss how these results may affect the design of high-availability peer-to-peer services.
Data Space Randomization Over the past several years, US-CERT advisories, as well as most critical updates from software vendors, have been due to memory corruption vulnerabilities such as buffer overflows, heap overflows, etc. Several techniques have been developed to defend against the exploitation of these vulnerabilities, with the most promising defenses being based on randomization. Two randomization techniques have been explored so far: address space randomization (ASR) that randomizes the location of objects in virtual memory, and instruction set randomization (ISR) that randomizes the representation of code. We explore a third form of randomization called data space randomization (DSR) that randomizes the representation of data stored in program memory. Unlike ISR, DSR is effective against non-control data attacks as well as code injection attacks. Unlike ASR, it can protect against corruption of non-pointer data as well as pointer-valued data. Moreover, DSR provides a much higher range of randomization (typically 2^32 for 32-bit data) as compared to ASR. Other interesting aspects of DSR include (a) it does not share a weakness common to randomization-based defenses, namely, susceptibility to information leakage attacks, and (b) it is capable of detecting some exploits that are missed by full bounds-checking techniques, e.g., some of the overflows from one field of a structure to the next field. Our implementation results show that with appropriate design choices, DSR can achieve a performance overhead in the range of 5% to 30% for a range of programs.
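The masking algebra behind DSR fits in a few lines: each object stores its data XORed with its own random mask, so a well-typed access round-trips transparently, while an overflow written under a neighboring object's different mask decodes to an unpredictable value. The toy class below only illustrates that algebra; the actual system is a compile-time C transformation.

```python
import os

class MaskedBuf:
    """Toy per-object data-space randomization via an XOR mask."""
    def __init__(self, size):
        self.mask = list(os.urandom(size))   # random mask, fixed per object
        self.data = [0] * size               # masked representation in "memory"

    def store(self, i, byte):
        self.data[i] = byte ^ self.mask[i]

    def load(self, i):
        return self.data[i] ^ self.mask[i]

buf = MaskedBuf(8)
buf.store(3, 0x41)
assert buf.load(3) == 0x41   # legitimate accesses are unaffected
# A raw write that bypasses store() -- e.g., an overflow masked for another
# object -- is unmasked with the wrong key and yields garbage rather than
# the attacker's chosen value.
```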
Online design bug detection: RTL analysis, flexible mechanisms, and evaluation Higher levels of resource integration and the addition of new features in modern multi-processors put significant pressure on their verification. Although a large amount of resources and time is devoted to the verification phase of modern processors, many design bugs escape the verification process and slip into processors operating in the field. These design bugs often lead to lower quality products, lower customer satisfaction, diminishing brand/company reputation, or even expensive product recalls.
IEEE 802.11 wireless LAN implemented on software defined radio with hybrid programmable architecture This paper describes a prototype software defined radio (SDR) transceiver on a distributed and heterogeneous hybrid programmable architecture; it consists of a central processing unit (CPU), digital signal processors (DSPs), and pre/postprocessors (PPPs), and supports both Personal Handy Phone System (PHS), and IEEE 802.11 wireless local area network (WLAN). It also supports system switching between PHS and WLAN and over-the-air (OTA) software downloading. In this paper, we design an IEEE 802.11 WLAN around the SDR; we show the software architecture of the SDR prototype and describe how it handles the IEEE 802.11 WLAN protocol. The medium access control (MAC) sublayer functions are executed on the CPU, while the physical layer (PHY) functions such as modulation/demodulation are processed by the DSPs; higher speed digital signal processes are run on the PPP implemented on a field-programmable gate array (FPGA). The most difficult problem in implementing the WLAN in this way is meeting the short interframe space (SIFS) requirement of the IEEE 802.11 standard; we elucidate the potential weakness of the current configuration and specify a way of implementing the IEEE 802.11 protocol that avoids this problem. This paper also describes an experimental evaluation of the prototype for WLAN use, the results of which agree well with computer-simulation results.
Understanding contention-based channels and using them for defense Microarchitectural resources such as caches and predictors can be used to leak information across security domains. Significant prior work has demonstrated attacks and defenses for specific types of such microarchitectural side and covert channels. In this paper, we introduce a general mathematical study of microarchitectural channels using information theory. Our conceptual contribution is a simple mathematical abstraction that captures the common characteristics of all microarchitectural channels. We call this the Bucket model and it reveals that microarchitectural channels are fundamentally different from side and covert channels in networking. We then quantify the communication capacity of several microarchitectural covert channels (including channels that rely on performance counters, AES hardware and memory buses) and measure bandwidths across both KVM based heavy-weight virtualization and light-weight operating-system level isolation. We demonstrate channel capacities that are orders of magnitude higher compared to what was previously considered possible. Finally, we introduce a novel way of detecting intelligent adversaries that try to hide while running covert channel eavesdropping attacks. Our method generalizes a prior detection scheme (that modeled static adversaries) by introducing noise that hides the detection process from an intelligent eavesdropper.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized, while digital CMOS-based PIM has faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
1.2
0.2
0.2
0.1
0
0
0
0
0
0
0
0
0
0
Observer-based Fuzzy Adaptive Inverse Optimal Output Feedback Control for Uncertain Nonlinear Systems In this article, an observer-based fuzzy adaptive inverse optimal output feedback control problem is studied for a class of nonlinear systems in strict-feedback form. The considered nonlinear systems contain unknown nonlinear dynamics and their states are not measured directly. Fuzzy logic systems are applied to identify the unknown nonlinear dynamics and an auxiliary nonlinear system is construct...
Type-2 Fuzzy Sets and Systems: An Overview [corrected reprint] As originally published in the February 2007 issue of IEEE Computational Intelligence Magazine, the above titled paper (ibid., vol. 2, no. 1, pp. 20-29, Feb 07) contained errors in mathematics that were introduced by the publisher. The corrected version is reprinted in its entirety.
Stability of switched positive linear systems with average dwell time switching. In this paper, the stability analysis problem for a class of switched positive linear systems (SPLSs) with average dwell time switching is investigated. A multiple linear copositive Lyapunov function (MLCLF) is first introduced, by which sufficient stability criteria, in terms of a set of linear matrix inequalities, are given for the underlying systems in both continuous-time and discrete-time contexts. The stability results for SPLSs under arbitrary switching, which have been previously studied in the literature, can be easily obtained by reducing the MLCLF to the common linear copositive Lyapunov function used for systems under arbitrary switching. Finally, a numerical example is given to show the effectiveness and advantages of the proposed techniques.
Output tracking control for a class of continuous-time T-S fuzzy systems This paper investigates the problem of output tracking for nonlinear systems with actuator fault using interval type-2 (IT2) fuzzy model approach. An IT2 state-feedback fuzzy controller is designed to perform the tracking control problem, where the membership functions can be freely chosen since the number of fuzzy rules is different from that of the IT2 T-S fuzzy model. Based on Lyapunov stability theory, an existence condition of the IT2 fuzzy H∞ output tracking controller is obtained to guarantee that the output of the closed-loop IT2 control system can track the output of a given reference model well in the H∞ sense. Finally, two illustrative examples are given to demonstrate the effectiveness and merits of the proposed design techniques.
Finite-Time Consensus Tracking Neural Network FTC of Multi-Agent Systems The finite-time consensus fault-tolerant control (FTC) tracking problem is studied for the nonlinear multi-agent systems (MASs) in the nonstrict feedback form. The MASs are subject to unknown symmetric output dead zones, actuator bias and gain faults, and unknown control coefficients. According to the properties of the neural network (NN), the unstructured uncertainties problem is solved. The Nussbaum function is used to address the output dead zones and unknown control directions problems. By introducing an arbitrarily small positive number, the “singularity” problem caused by combining the finite-time control and backstepping design is solved. According to the backstepping design and Lyapunov stability theory, a finite-time adaptive NN FTC controller is obtained, which guarantees that the tracking error converges to a small neighborhood of zero in a finite time, and all signals in the closed-loop system are bounded. Finally, the effectiveness of the proposed method is illustrated via a physical example.
Robust fuzzy tracking control for robotic manipulators In this paper, a stable adaptive fuzzy-based tracking control is developed for robot systems with parameter uncertainties and external disturbance. First, a fuzzy logic system is introduced to approximate the unknown robotic dynamics by using adaptive algorithm. Next, the effect of system uncertainties and external disturbance is removed by employing an integral sliding mode control algorithm. Consequently, a hybrid fuzzy adaptive robust controller is developed such that the resulting closed-loop robot system is stable and the trajectory tracking performance is guaranteed. The proposed controller is appropriate for the robust tracking of robotic systems with system uncertainties. The validity of the control scheme is shown by computer simulation of a two-link robotic manipulator.
A Survey of Reachability and Controllability for Positive Linear Systems. This paper is a survey of reachability and controllability results for discrete-time positive linear systems. It presents a variety of criteria in both algebraic and digraph forms for recognising these fundamental system properties with direct implications not only in dynamic optimization problems (such as those arising in inventory and production control, manpower planning, scheduling and other areas of operations research) but also in studying properties of reachable sets, in feedback control problems, and others. The paper highlights the intrinsic combinatorial structure of reachable/controllable positive linear systems and reveals the monomial components of such systems. The system matrix decomposition into monomial components is demonstrated by solving some illustrative examples.
GloMoSim: a library for parallel simulation of large-scale wireless networks A number of library-based parallel and sequential network simulators have been designed. This paper describes a library, called GloMoSim (for Global Mobile system Simulator), for parallel simulation of wireless networks. GloMoSim has been designed to be extensible and composable: the communication protocol stack for wireless networks is divided into a set of layers, each with its own API. Models of protocols at one layer interact with those at a lower (or higher) layer only via these APIs. The modular implementation enables consistent comparison of multiple protocols at a given layer. The parallel implementation of GloMoSim can be executed using a variety of conservative synchronization protocols, which include the null message and conditional event algorithms. This paper describes the GloMoSim library, addresses a number of issues relevant to its parallelization, and presents a set of experimental results on the IBM 9076 SP, a distributed memory multicomputer. These experiments use models constructed from the library modules.
TAG: a Tiny AGgregation service for ad-hoc sensor networks We present the Tiny AGgregation (TAG) service for aggregation in low-power, distributed, wireless environments. TAG allows users to express simple, declarative queries and have them distributed and executed efficiently in networks of low-power, wireless sensors. We discuss various generic properties of aggregates, and show how those properties affect the performance of our in-network approach. We include a performance study demonstrating the advantages of our approach over traditional centralized, out-of-network methods, and discuss a variety of optimizations for improving the performance and fault tolerance of the basic solution.
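A hedged sketch of that in-network evaluation for the AVERAGE aggregate, whose partial state is a (sum, count) pair: each node merges its children's partial states with its own reading and forwards one fixed-size record to its parent, instead of shipping every raw reading to the root. The routing tree and readings below are made up.

```python
def aggregate(node, children, reading):
    """Post-order merge of AVERAGE partial states up a routing tree."""
    s, c = reading[node], 1
    for child in children.get(node, []):
        cs, cc = aggregate(child, children, reading)
        s, c = s + cs, c + cc
    return s, c              # one record per link, regardless of subtree size

children = {"root": ["a", "b"], "a": ["c", "d"]}
reading = {"root": 20, "a": 10, "b": 30, "c": 40, "d": 50}
s, c = aggregate("root", children, reading)
print(s / c)                 # network-wide AVERAGE: 30.0
```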
On the evolution of user interaction in Facebook Online social networks have become extremely popular; numerous sites allow users to interact and share content using social links. Users of these networks often establish hundreds to even thousands of social links with other users. Recently, researchers have suggested examining the activity network - a network that is based on the actual interaction between users, rather than mere friendship - to distinguish between strong and weak links. While initial studies have led to insights on how an activity network is structurally different from the social network itself, a natural and important aspect of the activity network has been disregarded: the fact that over time social links can grow stronger or weaker. In this paper, we study the evolution of activity between users in the Facebook social network to capture this notion. We find that links in the activity network tend to come and go rapidly over time, and the strength of ties exhibits a general decreasing trend of activity as the social network link ages. For example, only 30% of Facebook user pairs interact consistently from one month to the next. Interestingly, we also find that even though the links of the activity network change rapidly over time, many graph-theoretic properties of the activity network remain unchanged.
The Quadrature LC Oscillator: A Complete Portrait Based on Injection Locking We show that the quadrature LC oscillator is best treated as two strongly coupled, nominally identical oscillators that are locked to the same frequency. Differential equations that extend Adler's description of locking to strong injection reveal the full dynamics of this circuit. With a simplifying insight, the analysis reveals all the modes of the oscillator, their stability, the effects of mism...
Permanent-magnets linear actuators applicability in automobile active suspensions Significant improvements in automobile suspension performance are achieved by active systems. However, current active suspension systems are too expensive and complex. Developments occurring in power electronics, permanent magnet materials, and microelectronic systems justify analysis of the possibility of implementing electromagnetic actuators in order to improve the performance of automobile suspension systems without excessively increasing complexity and cost. In this paper, the layouts of hydraulic and electromagnetic active suspensions are compared. The actuator requirements are calculated, and some experimental results proving that electromagnetic suspension could become a reality in the future are shown.
SPECS: A Lightweight Runtime Mechanism for Protecting Software from Security-Critical Processor Bugs Processor implementation errata remain a problem, and worse, a subset of these bugs are security-critical. We classified 7 years of errata from recent commercial processors to understand the magnitude and severity of this problem, and found that of 301 errata analyzed, 28 are security-critical. We propose the SECURITY-CRITICAL PROCESSOR ERRATA CATCHING SYSTEM (SPECS) as a low-overhead solution to this problem. SPECS employs a dynamic verification strategy that is made lightweight by limiting protection to only security-critical processor state. As a proof-of-concept, we implement a hardware prototype of SPECS in an open source processor. Using this prototype, we evaluate SPECS against a set of 14 bugs inspired by the types of security-critical errata we discovered in the classification phase. The evaluation shows that SPECS is 86% effective as a defense when deployed using only ISA-level state; incurs less than 5% area and power overhead; and has no software run-time overhead.
Power Efficiency Comparison of Event-Driven and Fixed-Rate Signal Conversion and Compression for Biomedical Applications Energy-constrained biomedical recording systems need power-efficient data converters and good signal compression in order to meet the stringent power consumption requirements of many applications. In literature today, typically a SAR ADC in combination with digital compression is used. Recently, alternative event-driven sampling techniques have been proposed that incorporate compression in the ADC, such as level-crossing A/D conversion. This paper describes the power efficiency analysis of such level-crossing ADC (LCADC) and the traditional fixed-rate SAR ADC with simple compression. A model for the power consumption of the LCADC is derived, which is then compared to the power consumption of the SAR ADC with zero-order hold (ZOH) compression for multiple biosignals (ECG, EMG, EEG, and EAP). The LCADC is more power efficient than the SAR ADC up to a cross-over point in quantizer resolution (for example 8 bits for an EEG signal). This cross-over point decreases with the ratio of the maximum to average slope in the signal of the application. It also changes with the technology and design techniques used. The LCADC is thus suited for low to medium resolution applications. In addition, the event-driven operation of an LCADC results in fewer data to be transmitted in a system application. The event-driven LCADC without timer and with single-bit quantizer achieves a reduction in power consumption at system level of two orders of magnitude, an order of magnitude better than the SAR ADC with ZOH compression. At system level, the LCADC thus offers a big advantage over the SAR ADC.
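The sampling contrast at the heart of this comparison can be sketched directly: a level-crossing converter emits an event only when the input moves by a quantization step, so a slowly varying biosignal produces far fewer samples than fixed-rate conversion. The signal, resolution, and crossing logic below are illustrative only.

```python
import math

def level_crossing_samples(signal, delta):
    """Emit (time, level) events whenever the input crosses a level."""
    events, last = [], signal[0]
    for t, x in enumerate(signal):
        while abs(x - last) >= delta:
            last += math.copysign(delta, x - last)   # step one level toward x
            events.append((t, last))
    return events

sig = [math.sin(2 * math.pi * t / 500) for t in range(1000)]  # slow tone
events = level_crossing_samples(sig, delta=2 / 2**4)          # ~4-bit levels
print(len(sig), "fixed-rate samples vs", len(events), "level-crossing events")
```

The event count tracks the signal's slope rather than a fixed clock, which is the behavior behind the resolution cross-over point the paper identifies.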
1.2
0.2
0.2
0.2
0.2
0.1
0.05
0
0
0
0
0
0
0
MIMO Switched-Capacitor DC-DC Converters Using Only Parasitic Capacitances Through Scalable Parasitic Charge Redistribution. This paper presents a multiple-input multiple-output (MIMO) switched-capacitor (SC) dc-dc converter that only uses the parasitic capacitance already present in fully integrated SC power converters to generate multiple dc voltages. When used in an SC converter together with the scalable parasitic charge redistribution technique, the presented MIMO converter provides additional voltage rails, which ...
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use dominance frontiers, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ⋄W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ⋄W. Thus, ⋄W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area² product (EDA²P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA²P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
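As one concrete instance of the algorithm the review surveys, here is a compact ADMM iteration for the lasso, minimize (1/2)||Ax − b||² + λ||z||₁ subject to x = z: a closed-form ridge x-update, a soft-thresholding z-update, and a dual update. Problem sizes, ρ, and the fixed iteration count are illustrative.

```python
import numpy as np

def lasso_admm(A, b, lam, rho=1.0, iters=200):
    n = A.shape[1]
    x = z = u = np.zeros(n)
    inv = np.linalg.inv(A.T @ A + rho * np.eye(n))   # factor once, reuse
    Atb = A.T @ b
    for _ in range(iters):
        x = inv @ (Atb + rho * (z - u))              # x-update: ridge solve
        v = x + u
        z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)  # soft-threshold
        u = u + x - z                                # dual ascent on x = z
    return z

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 10))
x_true = np.zeros(10); x_true[[1, 5]] = [3.0, -2.0]
b = A @ x_true + 0.01 * rng.standard_normal(40)
print(np.round(lasso_admm(A, b, lam=1.0), 2))        # recovers a sparse vector
```

The same split-and-alternate pattern carries over to the distributed settings the paper emphasizes, with the x-update parallelized across data blocks.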
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)-size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε²}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D+√n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(log N), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator, while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via error amplifier) with a moderate-gain high-frequency path (via bandpass filter) at the inputs of the PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate the two zeros required in a Type III compensator. The constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process, with the constant Gm/C biasing technique applied to one of the designs. Measurement results show that the converter output settles within 7 μs for a load current step of 500 mA. A peak efficiency of 97% is obtained at 360 mW output power, and a high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of the proposed compensator are reduced by >75% in both designs, compared to an equivalent conventional Type III compensator.
Fundamental analysis of a car to car visible light communication system This paper presents a mathematical model for car-to-car (C2C) visible light communications (VLC) that aims to predict the system performance under different communication geometries. A market-weighted headlamp beam pattern model is employed. We consider both the line-of-sight (LOS) and non-line-of-sight (NLOS) links, and outline the relationship between the communication distance, the system bit error rate (BER) performance and the BER distribution on a vertical plane. Results show that by placing a photodetector (PD) at a height of 0.2-0.4 m above road surface in the car, the communications coverage range can be extended up to 20 m at a data rate of 2Mbps.
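For intuition about the distance scaling behind such coverage results, a generic Lambertian LOS link budget can be sketched as follows. Note this is the textbook VLC channel model, not the market-weighted headlamp pattern the paper employs, and the transmit power, detector area, and half-power angle below are invented for illustration:

    import math

    def los_received_power(pt_w, d_m, phi, psi, half_angle, area_m2):
        """Lambertian LOS link: emitter order m set by the half-power angle,
        inverse-square path loss, and the detector's effective area."""
        m = -math.log(2) / math.log(math.cos(half_angle))
        gain = (m + 1) / (2 * math.pi * d_m ** 2)
        return pt_w * gain * math.cos(phi) ** m * area_m2 * math.cos(psi)

    # Illustrative numbers: 1 W emitter, 1 cm^2 photodetector, 30 deg half-power angle.
    for d in (5, 10, 20):
        pr = los_received_power(1.0, d, math.radians(5), math.radians(5),
                                math.radians(30), 1e-4)
        print(f"d = {d:2d} m, Pr = {pr:.2e} W")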
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
Scores (score_0–score_13): 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Analysis of Direct-Conversion IQ Transmitters With 25% Duty-Cycle Passive Mixers. The performance of direct-conversion IQ transmitters with 25% duty-cycle passive mixers is analyzed. The up-conversion transfer function is calculated and it is shown that due to lack of reverse isolation of the passive mixer, the high- and low-side conversion gains can be different. The contribution of thermal noise from mixer switches to the total output noise of the transmitter is formulated. I...
A 28-GHz 32-Element TRX Phased-Array IC With Concurrent Dual-Polarized Operation and Orthogonal Phase and Gain Control for 5G Communications. This paper presents the first reported 28-GHz phased-array IC for 5G communications. Implemented in 130-nm SiGe BiCMOS, the IC includes 32 TRX elements and features concurrent independent beams in two polarizations in either TX or RX operation. Circuit techniques to enable precise beam steering, orthogonal phase and amplitude control at each front end, and independent tapering and beam steering at...
A 28-GHz CMOS Direct Conversion Transceiver With Packaged 2 × 4 Antenna Array for 5G Cellular System. This paper describes a 28-GHz CMOS direct conversion transceiver with packaged 2 × 4 patch antenna array for 5G communication. Beamforming antenna and reconfigurable transceiver architecture are used for high effective isotropic radiated power (EIRP). For low error vector magnitude (EVM), switchless matching transmitter (Tx)/receiver (Rx) to antenna and 28-GHz injection-locked local oscillator (LO...
Second-Order Equivalent Circuits for the Design of Doubly-Tuned Transformer Matching Networks. The doubly-tuned magnetic transformer, comprising coupled inductors shunted by capacitors, is today widely in use as interstage network and for impedance matching in silicon millimeter waves amplifiers. It provides several advantages, compared with simple LC resonators, but the design is made complex by the high order of the network, featuring multiple resonances, and by the large number of compon...
2.1 mm-Wave 5G Radios: Baseband to Waves There are many challenges in building millimeter-Wave (mmW) 5G radios [1] -[3]. Some of the key challenges are the cost, heat dissipation, and array calibration. This paper describes ADI's full line-up of mmW 5G radios used today, with a focus on the millimeter wave front-end portion, and how it addresses some of these challenges. The radio block diagram, shown in Fig. 2.1.1, is an example of a dual-polarized 24-to-30GHz band mmW radio. All ICs in this radio cover 24 to 30GHz, allowing the same chips to be used in n257, n258, and n261 radios, which reduces the development cost. The radio consists of two domains: BB-IF and mmW. The BB-IF domain contains either an IF transceiver utilizing quadrature baseband data converters and mixers to generate the IF, or data converters (MxFE) to directly synthesize the IF. The former is optimal for narrower bandwidth applications while the latter consumes more power but can support higher bandwidths. The mmW domain consists of a mmW Up/Down converter and a 16-channel, (2 polarizations ×8 channels per pol) high-performance beamformer (BF). The mmW chips utilize a 45nm RF SOI process, which is optimized for RF performance at the mmW 5G bands. The SOI process is a 12-inch process, hence economically suitable for large-volume applications. The BF linear output power is 12dBm/channel @ 3% EVM using a 400MHz 5G NR waveform. The channel P1dB is 20dBm. Two mmW BFs, cover 24-to-30 and 37-to-44GHz bands, respectively. An implementation of the mmW front-end, consisting of 128 dual-polarized antenna elements, 16 BFs, 4 Up/Down frequency converters (UDCs), and the power-management circuitry has been fabricated and is shown in Fig. 2.1.2. Over-the-air (OTA) measurement results of the fabricated array are presented below. The measurements include the radiation pattern, EIRP, linearity, and combined throughput of four streams. The paper also discusses the following: OTA Performance of ADI's mmW 5G radios, antenna performance and design aspects, thermal aspects and heat dissipation modelling, and calibration of mmW radios.
mm-Wave Mixer-First Receiver With Selective Passive Wideband Low-Pass Filtering The front-end of a conventional millimeter-wave receiver (RX) consists of a bandpass filter (BPF) and low-noise amplifier (LNA) prior to the frequency down-conversion mixer. In interference-limited wireless channels, it is more footprint and power efficient to remove the BPF and LNA, and realize the mixer as a “passive” switching scheme. The noise figure of this passive mixer-first RX is improved ...
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
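A minimal sketch of Chord's core mapping, assuming SHA-1 identifiers on a 2^m ring: a key is stored at its successor, the first node whose identifier equals or follows the key clockwise. Real Chord resolves the successor in O(log n) hops using finger tables; the sorted-list lookup below is only for illustration:

    import bisect, hashlib

    M = 16  # identifier bits, kept small for illustration

    def chord_id(name: str) -> int:
        """Hash a node or key name onto the 2^M identifier ring."""
        digest = hashlib.sha1(name.encode()).digest()
        return int.from_bytes(digest, "big") % (1 << M)

    ring = sorted(chord_id(f"node-{i}") for i in range(8))

    def successor(key_id: int) -> int:
        """First node identifier clockwise from key_id, wrapping around."""
        i = bisect.bisect_left(ring, key_id)
        return ring[i % len(ring)]

    k = chord_id("some-data-item")
    print(f"key {k} is stored at node {successor(k)}")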
Measuring the Gap Between FPGAs and ASICs This paper presents experimental measurements of the differences between a 90nm CMOS FPGA and 90nm CMOS Standard Cell ASICs in terms of logic density, circuit speed and power consumption. We are motivated to make these measurements to enable system designers to make better informed choices between these two media and to give insight to FPGA makers on the deficiencies to attack and thereby improve FPGAs. In the paper, we describe the methodology by which the measurements were obtained and we show that, for circuits containing only combinational logic and flip-flops, the ratio of silicon area required to implement them in FPGAs and ASICs is on average 40. Modern FPGAs also contain "hard" blocks such as multiplier/accumulators and block memories, and we find that these blocks reduce this average area gap significantly to as little as 21. The ratio of critical path delay, from FPGA to ASIC, is roughly 3 to 4, with less influence from block memory and hard multipliers. The dynamic power consumption ratio is approximately 12 times and, with hard blocks, this gap generally becomes smaller.
Termination detection for diffusing computations
Distributed multi-agent optimization with state-dependent communication We study distributed algorithms for solving global optimization problems in which the objective function is the sum of local objective functions of agents and the constraint set is given by the intersection of local constraint sets of agents. We assume that each agent knows only his own local objective function and constraint set, and exchanges information with the other agents over a randomly varying network topology to update his information state. We assume a state-dependent communication model over this topology: communication is Markovian with respect to the states of the agents and the probability with which the links are available depends on the states of the agents. We study a projected multi-agent subgradient algorithm under state-dependent communication. The state-dependence of the communication introduces significant challenges and couples the study of information exchange with the analysis of subgradient steps and projection errors. We first show that the multi-agent subgradient algorithm when used with a constant stepsize may result in the agent estimates to diverge with probability one. Under some assumptions on the stepsize sequence, we provide convergence rate bounds on a “disagreement metric” between the agent estimates. Our bounds are time-nonhomogeneous in the sense that they depend on the initial starting time. Despite this, we show that agent estimates reach an almost sure consensus and converge to the same optimal solution of the global optimization problem with probability one under different assumptions on the local constraint sets and the stepsize sequence.
Practical Mitigations for Timing-Based Side-Channel Attacks on Modern x86 Processors This paper studies and evaluates the extent to which automated compiler techniques can defend against timing-based side-channel attacks on modern x86 processors. We study how modern x86 processors can leak timing information through side-channels that relate to control flow and data flow. To eliminate key-dependent control flow and key-dependent timing behavior related to control flow, we propose the use of if-conversion in a compiler backend, and evaluate a proof-of-concept prototype implementation. Furthermore, we demonstrate two ways in which programs that lack key-dependent control flow and key-dependent cache behavior can still leak timing information on modern x86 implementations such as the Intel Core 2 Duo, and propose defense mechanisms against them.
A decentralized modular control framework for robust control of FES-activated walker-assisted paraplegic walking using terminal sliding mode and fuzzy logic control. A major challenge to developing functional electrical stimulation (FES) systems for paraplegic walking and widespread acceptance of these systems is the design of a robust control strategy that provides satisfactory tracking performance. The systems need to be robust against time-varying properties of neuromusculoskeletal dynamics, day-to-day variations, subject-to-subject variations, external dis...
CCFI: Cryptographically Enforced Control Flow Integrity Control flow integrity (CFI) restricts jumps and branches within a program to prevent attackers from executing arbitrary code in vulnerable programs. However, traditional CFI still offers attackers too much freedom to chose between valid jump targets, as seen in recent attacks. We present a new approach to CFI based on cryptographic message authentication codes (MACs). Our approach, called cryptographic CFI (CCFI), uses MACs to protect control flow elements such as return addresses, function pointers, and vtable pointers. Through dynamic checks, CCFI enables much finer-grained classification of sensitive pointers than previous approaches, thwarting all known attacks and resisting even attackers with arbitrary access to program memory. We implemented CCFI in Clang/LLVM, taking advantage of recently available cryptographic CPU instructions (AES-NI). We evaluate our system on several large software packages (including nginx, Apache and memcache) as well as all their dependencies. The cost of protection ranges from a 3--18% decrease in server request rate. We also expect this overhead to shrink as Intel improves the performance AES-NI.
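The essence of the CCFI check, tagging each control-flow pointer with a MAC and verifying the tag before use, can be illustrated in a few lines. This is a conceptual sketch using Python's hmac module; the real system computes AES-NI-based MACs over pointers plus context inside Clang/LLVM-instrumented binaries, with keys held in registers rather than memory:

    import hmac, hashlib

    KEY = b"per-process secret key"  # illustrative; CCFI keeps keys in XMM registers

    def mac_pointer(ptr: int, context: bytes) -> bytes:
        """Tag a pointer with a MAC bound to its class/context."""
        msg = ptr.to_bytes(8, "little") + context
        return hmac.new(KEY, msg, hashlib.sha256).digest()[:8]

    def check_pointer(ptr: int, context: bytes, tag: bytes) -> int:
        """Refuse to use a pointer whose tag does not verify."""
        if not hmac.compare_digest(mac_pointer(ptr, context), tag):
            raise RuntimeError("control-flow integrity violation")
        return ptr

    ret_addr = 0x401234
    tag = mac_pointer(ret_addr, b"return-address")
    check_pointer(ret_addr, b"return-address", tag)       # passes
    # check_pointer(0xdeadbeef, b"return-address", tag)   # would raise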
An Event-Driven Quasi-Level-Crossing Delta Modulator Based on Residue Quantization This article introduces a digitally intensive event-driven quasi-level-crossing (quasi-LC) delta-modulator analog-to-digital converter (ADC) with adaptive resolution (AR) for Internet of Things (IoT) wireless networks, in which minimizing the average sampling rate for sparse input signals can significantly reduce the power consumed in data transmission, processing, and storage. The proposed AR quasi-LC delta modulator quantizes the residue voltage signal with a 4-bit asynchronous successive-approximation-register (SAR) sub-ADC, which enables a straightforward implementation of LC and AR algorithms in the digital domain. The proposed modulator achieves data compression by means of a globally signal-dependent average sampling rate and achieves AR through a digital multi-level comparison window that overcomes the tradeoff between the dynamic range and the input bandwidth in the conventional LC ADCs. Engaging the AR algorithm reduces the average sampling rate by a factor of 3 at the edge of the modulator's signal bandwidth. The proposed modulator is fabricated in 28-nm CMOS and achieves a peak SNDR of 53 dB over a signal bandwidth of 1.42 MHz while consuming 205 μW and occupying an active area of 0.0126 mm².
Scores (score_0–score_13): 1.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0, 0, 0, 0, 0, 0, 0, 0
Design of Multistandard Channelization Accelerators for Software Defined Radio Handsets This paper presents a novel multistandard channelization accelerator design methodology for the digital front-end of a software defined radio (SDR) handset. Dedicated hardware (HW) accelerator cores have a power efficiency which is several orders higher than a software implementation and hence, have been extensively used for accelerating the computationally intensive tasks like channelization. However, these cores are generally inflexible and optimized for a single standard. The growing need for supporting multiple wireless standards with heterogeneous throughput and mobility requirements in a small form factor mobile handset with a limited silicon area, requires the accelerator cores to be flexible and reusable in addition to being power efficient. The proposed methodology exploits commonalities in the channelization specifications to hardwire and reuse a significant portion of the accelerator, across multiple standards. The resulting accelerator is area efficient and scalable for supporting an arbitrary number of standards.
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use {\em dominance frontiers}, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
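The dominance-frontier computation that drives φ-function placement admits a compact formulation; the sketch below uses the per-join-node "runner" walk (popularized later by Cooper, Harvey, and Kennedy) rather than the paper's original formulation, but it computes the same sets, and it assumes immediate dominators are already available:

    def dominance_frontiers(preds, idom):
        """preds: node -> list of CFG predecessors; idom: node -> immediate
        dominator (entry maps to itself). DF(n) holds the join nodes where
        n's dominance stops, i.e. where phi-functions may be needed."""
        df = {n: set() for n in idom}
        for block, ps in preds.items():
            if len(ps) < 2:
                continue  # only join nodes contribute to frontiers
            for p in ps:
                runner = p
                while runner != idom[block]:
                    df[runner].add(block)
                    runner = idom[runner]
        return df

    # Diamond CFG: entry -> then / else -> join.
    preds = {"entry": [], "then": ["entry"], "else": ["entry"],
             "join": ["then", "else"]}
    idom = {"entry": "entry", "then": "entry", "else": "entry", "join": "entry"}
    print(dominance_frontiers(preds, idom))  # DF(then) = DF(else) = {'join'}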
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ⋄W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ⋄W. Thus, ⋄W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area² product (EDA²P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA²P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
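As a concrete instance, the scaled-form ADMM iteration for the lasso (minimize ½‖Ax − b‖² + λ‖x‖₁) alternates a ridge-like x-update, an elementwise soft-thresholding z-update, and a dual update. A minimal numpy sketch, with the penalty ρ and iteration count chosen arbitrarily rather than tuned:

    import numpy as np

    def admm_lasso(A, b, lam, rho=1.0, iters=200):
        n = A.shape[1]
        Atb = A.T @ b
        L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))  # factor once, reuse
        x = z = u = np.zeros(n)
        soft = lambda v, k: np.sign(v) * np.maximum(np.abs(v) - k, 0.0)
        for _ in range(iters):
            x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))
            z = soft(x + u, lam / rho)   # proximal step for the l1 term
            u = u + x - z                # scaled dual ascent on the split x = z
        return z

    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 20))
    x_true = np.zeros(20); x_true[:3] = [2.0, -1.5, 1.0]
    b = A @ x_true + 0.01 * rng.standard_normal(50)
    print(np.round(admm_lasso(A, b, lam=1.0), 2))  # recovers the sparse support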
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)-size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε²}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D+√n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(log N), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator; while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via error amplifier) with a moderate-gain high-frequency path (via bandpass filter) at the inputs of PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate two zeros required in a Type III compensator. Constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process with constant Gm/C biasing technique being applied to one of the designs. Measurement result shows that converter output is settled within 7 μs for a load current step of 500 mA. Peak efficiency of 97% is obtained at 360 mW output power, and high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of proposed compensator is reduced by > 75 % in both designs, compared to an equivalent conventional Type III compensator.
Conductance modulation techniques in switched-capacitor DC-DC converter for maximum-efficiency tracking and ripple mitigation in 22nm Tri-gate CMOS Active conduction modulation techniques are demonstrated in a fully integrated multi-ratio switched-capacitor voltage regulator with hysteretic control, implemented in 22nm tri-gate CMOS with high-density MIM capacitor. We present (i) an adaptive switching frequency and switch-size scaling scheme for maximum efficiency tracking across a wide range of voltages and currents, governed by a frequency-based control law that is experimentally validated across multiple dies and temperatures, and (ii) a simple active ripple mitigation technique to modulate the gate drive of select MOSFET switches effectively in all conversion modes. Efficiency boosts of up to 15% are measured under light-load conditions. Load-independent output ripple of <50 mV is achieved, enabling fewer interleaving phases. Testchip implementations and measurements demonstrate ease of integration in SoC designs, power efficiency benefits, and EMI/RFI improvements.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
Scores (score_0–score_13): 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Social big data: Recent achievements and new challenges • The paper presents the methodologies on information fusion for social media. • The methodologies, frameworks, and software used to work with big data are given. • The state of the art in the data analytic techniques on social big data is provided. • Social big data applications for various domains are described and analyzed.
scikit-image: Image processing in Python. scikit-image is an image processing library that implements algorithms and utilities for use in research, education and industry applications. It is released under the liberal Modified BSD open source license, provides a well-documented API in the Python programming language, and is developed by an active, international team of collaborators. In this paper we highlight the advantages of open source to achieve the goals of the scikit-image library, and we showcase several real-world image processing applications that use scikit-image.
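A short usage example of the library's public API (data.camera, filters.sobel, filters.threshold_otsu, and measure.label are all documented scikit-image entry points):

    from skimage import data, filters, measure

    image = data.camera()                           # bundled grayscale test image
    edges = filters.sobel(image)                    # edge-magnitude map
    binary = image > filters.threshold_otsu(image)  # global Otsu threshold
    labels = measure.label(binary)                  # connected-component labels
    print(edges.shape, labels.max(), "regions")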
The rise of "big data" on cloud computing: Review and open research issues. Cloud computing is a powerful technology to perform massive-scale and complex computing. It eliminates the need to maintain expensive computing hardware, dedicated space, and software. Massive growth in the scale of data or big data generated through cloud computing has been observed. Addressing big data is a challenging and time-demanding task that requires a large computational infrastructure to ensure successful data processing and analysis. The rise of big data in cloud computing is reviewed in this study. The definition, characteristics, and classification of big data along with some discussions on cloud computing are introduced. The relationship between big data and cloud computing, big data storage systems, and Hadoop technology are also discussed. Furthermore, research challenges are investigated, with focus on scalability, availability, data integrity, data transformation, data quality, data heterogeneity, privacy, legal and regulatory issues, and governance. Lastly, open research issues that require substantial research efforts are summarized.
Next-Generation Big Data Analytics: State of the Art, Challenges, and Future Research Topics. The term big data occurs more frequently now than ever before. A large number of fields and subjects, ranging from everyday life to traditional research fields (i.e., geography and transportation, biology and chemistry, medicine and rehabilitation), involve big data problems. The popularizing of various types of network has diversified types, issues, and solutions for big data more than ever befor...
Trends in transportation and logistics. • Overview of the historical contributions of Operational Research to problems in transportation and logistics. • Future trends in transportation and logistics. • Future potential contributions of Operational Research to problems in transportation and logistics.
Unlocking the power of big data in new product development. This study explores how big data can be used to enable customers to express unrecognised needs. By acquiring this information, managers can gain opportunities to develop customer-centred products. Big data can be defined as multimedia-rich and interactive low-cost information resulting from mass communication. It offers customers a better understanding of new products and provides new, simplified modes of large-scale interaction between customers and firms. Although previous studies have pointed out that firms can better understand customers’ preferences and needs by leveraging different types of available data, the situation is evolving, with increasing application of big data analytics for product development, operations and supply chain management. In order to utilise the customer information available from big data to a larger extent, managers need to identify how to establish a customer-involving environment that encourages customers to share their ideas with managers, contribute their know-how, fiddle around with new products, and express their actual preferences. We investigate a new product development project at an electronics company, STE, and describe how big data is used to connect to, interact with and involve customers in new product development in practice. Our findings reveal that big data can offer customer involvement so as to provide valuable input for developing new products. In this paper, we introduce a customer involvement approach as a new means of coming up with customer-centred new product development.
Efficient closed high-utility pattern fusion model in large-scale databases High-Utility Itemset Mining (HUIM) has been considered a major topic in recent decades since it reveals profit strategies for industrial decision-making. Most existing works have focused on mining high-utility itemsets from databases, producing large numbers of patterns; however, exact decisions are still challenging to make from such large amounts of discovered knowledge. Closed high-utility itemset mining (CHUIM) provides a smart way to present concise high-utility itemsets that can be more effective for making correct decisions. However, none of the existing works have focused on handling large-scale databases to integrate discovered knowledge from several distributed databases. In this paper, we first present a large-scale information fusion architecture to integrate discovered closed high-utility patterns from several distributed databases. The generic composite model is used to cluster transactions with regard to their relevant correlation, which ensures correctness and completeness of the fusion model. The well-known MapReduce framework is then deployed in the developed DFM-Miner algorithm to handle big datasets for information fusion and integration. Experiments compare the method to the state-of-the-art CHUI-Miner and CLS-Miner algorithms for mining closed high-utility patterns, and the results indicate that the designed model handles large-scale databases well with less memory usage. Moreover, the designed MapReduce framework can speed up the mining performance of closed high-utility patterns in the developed fusion system.
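For readers unfamiliar with the utility measure underlying HUIM, the sketch below computes an itemset's utility over a quantitative transaction database (per-item profit times purchased quantity, summed over the transactions containing the whole itemset). The actual mining, closedness checking, and MapReduce-based fusion of the paper go far beyond this definition; the profits and transactions are invented:

    profit = {"a": 5, "b": 2, "c": 1}   # external utility per item
    db = [                              # each transaction: item -> quantity
        {"a": 1, "b": 3},
        {"a": 2, "c": 6},
        {"b": 1, "c": 2},
    ]

    def utility(itemset, db, profit):
        """u(X) = sum over transactions containing X of quantity * profit."""
        total = 0
        for t in db:
            if all(item in t for item in itemset):
                total += sum(t[item] * profit[item] for item in itemset)
        return total

    print(utility({"a"}, db, profit))       # 5*1 + 5*2 = 15
    print(utility({"a", "b"}, db, profit))  # 5*1 + 2*3 = 11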
Tapestry: a resilient global-scale overlay for service deployment We present Tapestry, a peer-to-peer overlay routing infrastructure offering efficient, scalable, location-independent routing of messages directly to nearby copies of an object or service using only localized resources. Tapestry supports a generic decentralized object location and routing applications programming interface using a self-repairing, soft-state-based routing layer. The paper presents the Tapestry architecture, algorithms, and implementation. It explores the behavior of a Tapestry deployment on PlanetLab, a global testbed of approximately 100 machines. Experimental results show that Tapestry exhibits stable behavior and performance as an overlay, despite the instability of the underlying network layers. Several widely distributed applications have been implemented on Tapestry, illustrating its utility as a deployment infrastructure.
A Logic-in-Memory Computer If, as presently projected, the cost of microelectronic arrays in the future will tend to reflect the number of pins on the array rather than the number of gates, the logic-in-memory array is an extremely attractive computer component. Such an array is essentially a microelectronic memory with some combinational logic associated with each storage element. A logic-in-memory computer is described that is organized around a logic-enhanced ``cache'' memory array. Used as a cache, a logic-in-memory array performs as a high-speed buffer between a conventional CPU and a conventional memory. The effect on the computer system of the cache and its control mechanism is to make the main memory appear to have all of the processing capabilities and almost the same performance as the cache. Operations within the array are naturally organized as operations on blocks of data called ``sectors.'' Among the operations that can be performed are arithmetic and logical operations on pairs of elements from two sectors, and a variety of associative search operations on a single sector. For such operations, the main memory of the computer appears to the program to be composed of a collection of logic-in-memory arrays, each the size of a sector. Because of the high-speed, highly parallel sector operations, the logic-in-memory computer points to a new direction for achieving orders of magnitude increase in computer performance. Moreover, since the computer is specifically organized for large-scale integration, the increased performance might be obtained for a comparatively small dollar cost.
Communication-efficient leader election and consensus with limited link synchrony We study the degree of synchrony required to implement the leader election failure detector Ω and to solve consensus in partially synchronous systems. We show that in a system with n processes and up to f process crashes, one can implement Ω and solve consensus provided there exists some (unknown) correct process with f outgoing links that are eventually timely. In the special case where f = 1, an important case in practice, this implies that to implement Ω and solve consensus it is sufficient to have just one eventually timely link -- all the other links in the system, Θ(n²) of them, may be asynchronous. There is no need to know which link p → q is eventually timely, when it becomes timely, or what is its bound on message delay. Surprisingly, it is not even required that the source p or destination q of this link be correct: either p or q may actually crash, in which case the link p → q is eventually timely in a trivial way, and it is useless for sending messages. We show that these results are in a sense optimal: even if every process has f - 1 eventually timely links, neither Ω nor consensus can be solved. We also give an algorithm that implements Ω in systems where some correct process has f outgoing links that are eventually timely, such that eventually only f links carry messages, and we show that this is optimal. For f = 1, this algorithm ensures that all the links, except for one, eventually become quiescent.
Design of a Pressure Control System With Dead Band and Time Delay This paper investigates the control of pressure in a hydraulic circuit containing a dead band and a time-varying delay. The dead band is considered as a linear term and a perturbation. A sliding mode controller is designed. Stability conditions are established by making use of Lyapunov–Krasovskii functionals; non-perfect time-delay estimation is studied, and a condition for the effect of uncertainties in the dead zone on stability is derived. The effect of different LMI formulations on conservativeness is also studied. The control law is tested in practice.
A 13-b 40-MSamples/s CMOS pipelined folding ADC with background offset trimming Two key concepts of pipelining and background offset trimming are applied to demonstrate a 13-b 40-MSamples/s CMOS analog-to-digital converter (ADC) based on the basic folding and interpolation architecture. Folding amplifier stages made of simple differential pairs are pipelined using distributed interstage track-and-holders. Background offset trimming implemented with a highly oversampling delta-sigma modulator enhances the resolution of the CMOS folders beyond 12 bits. The background offset trimming circuit continuously measures and adjusts the offsets of the folding amplifiers without interfering with the normal operation. The prototype system is further refined using subranging and digital correction, and exhibits a spurious-free dynamic range (SFDR) of 82 dB at 40 MSamples/s. The measured differential nonlinearity (DNL) and integral nonlinearity (INL) are about ±0.5 and ±2.0 LSB, respectively. The chip fabricated in 0.5-μm CMOS occupies 8.7 mm² and consumes 800 mW at 5 V.
High Frequency Buck Converter Design Using Time-Based Control Techniques Time-based control techniques for the design of high switching frequency buck converters are presented. Using time as the processing variable, the proposed controller operates with CMOS-level digital-like signals but without adding any quantization error. A ring oscillator is used as an integrator in place of conventional opamp-RC or Gm-C integrators while a delay line is used to perform voltage to time conversion and to sum time signals. A simple flip-flop generates the pulse-width modulated signal from the time-based output of the controller. Hence time-based control eliminates the need for a wide bandwidth error amplifier and pulse-width modulator (PWM) in analog controllers, or a high resolution analog-to-digital converter (ADC) and digital PWM in digital controllers. As a result, it can be implemented in small area and with minimal power. Fabricated in a 180 nm CMOS process, the prototype buck converter occupies an active area of 0.24 mm², of which the controller occupies only 0.0375 mm². It operates over a wide range of switching frequencies (10-25 MHz) and regulates output to any desired voltage in the range of 0.6 V to 1.5 V with 1.8 V input voltage. With a 500 mA step in the load current, the settling time is less than 3.5 μs and the measured reference tracking bandwidth is about 1 MHz. Better than 94% peak efficiency is achieved while consuming a quiescent current of only 2 μA/MHz.
A VCO-Based Nonuniform Sampling ADC Using a Slope-Dependent Pulse Generator This paper presents a voltage-controlled oscillator (VCO)-based nonuniform sampling analog-to-digital converter (ADC) as an alternative to the level-crossing (LC)-based converters for digitizing biopotential signals. This work aims to provide a good signal-to-noise-and-distortion ratio at a low average sampling rate. In the proposed conversion method, a slope-dependent pulse generation block is used to provide a variable sample rate adjusted according to the input signal's slope. Simulation results show that the introduced method meets a target reconstruction quality with a sampling rate approaching 92 Sps, while on the same MIT-BIH Arrhythmia N 106 ECG benchmark, the classic LC-based approach requires a sampling rate higher than 500 Sps. The benefits of the proposed method are more remarkable when the input signal is very noisy. The proposed ADC achieves a compression ratio close to 4, but with only 5.4% root-mean-square difference when tested using the MIT-BIH Arrhythmia Database.
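The fixed-threshold level-crossing baseline that this slope-dependent scheme refines is easy to state in code: a new sample is emitted only when the input drifts more than Δ away from the last captured value, so the rate tracks signal activity. A minimal sketch with an arbitrary Δ and test signal:

    import numpy as np

    def level_crossing_sample(x, delta):
        """Indices where the signal leaves a +/- delta band around the
        last sampled value (classic LC sampling, fixed threshold)."""
        idx, last = [0], x[0]
        for i in range(1, len(x)):
            if abs(x[i] - last) >= delta:
                idx.append(i)
                last = x[i]
        return np.array(idx)

    t = np.linspace(0.0, 1.0, 5000)
    x = np.sin(2 * np.pi * 3 * t) * np.exp(-2 * t)  # decaying, bursty test signal
    idx = level_crossing_sample(x, delta=0.05)
    print(f"{len(idx)} event-driven samples vs {len(t)} uniform points")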
Scores (score_0–score_13): 1.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0, 0, 0, 0, 0, 0, 0
Stable Leader Election We introduce the notion of stable leader election and derive several algorithms for this problem. Roughly speaking, a leader election algorithm is stable if it ensures that once a leader is elected, it remains the leader for as long as it does not crash and its links have been behaving well, irrespective of the behavior of other processes and links. In addition to being stable, our leader election algorithms have several desirable properties. In particular, they are all communication-efficient, i.e., they eventually use only n links to carry messages, and they are robust, i.e., they work in systems where only the links to/from some correct process are required to be eventually timely. Moreover, our best leader election algorithm tolerates message losses, and it ensures that a leader is elected in constant time when the system is stable. We conclude the paper by applying the above ideas to derive a robust and efficient algorithm for the eventually perfect failure detector ⋄P.
A Clustering Scheme For Hierarchical Control In Multi-Hop Wireless Networks In this paper we present a clustering scheme to create a hierarchical control structure for multi-hop wireless networks. A cluster is defined as a subset of vertices whose induced graph is connected. In addition, a cluster is required to obey certain constraints that are useful for management and scalability of the hierarchy. All these constraints cannot be met simultaneously for general graphs, but we show how such a clustering can be obtained for wireless network topologies. Finally, we present an efficient distributed implementation of our clustering algorithm for a set of wireless nodes to create the set of desired clusters.
A Mobility Based Metric for Clustering in Mobile Ad Hoc Networks Abstract: This paper presents a novel relative mobility metric for mobile ad hoc networks (MANETs). It is based on the ratio of power levels due to successive receptions at each node from its neighbors. We propose a distributed clustering algorithm, MOBIC, based on the use of this mobility metric for selection of clusterheads, and demonstrate that it leads to more stable cluster formation than the "least clusterhead change" version of the well known Lowest-ID clustering algorithm [3]. We show reduction of as much as 33% in the rate of clusterhead changes owing to the use of the proposed technique. In a MANET that uses scalable cluster-based services, network performance metrics such as throughput and delay are tightly coupled with the frequency of cluster reorganization. Therefore, we believe that using MOBIC can result in a more stable configuration, and thus yield better performance.
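The paper's relative mobility metric is the log-ratio of received powers of two successive packets from the same neighbor, which a node then aggregates over its neighborhood (MOBIC uses the variance about zero) to judge clusterhead suitability. A small numpy sketch under those definitions, with the power readings invented for illustration:

    import numpy as np

    def relative_mobility_db(p_new, p_old):
        """10*log10 of the successive received-power ratio per neighbor:
        near 0 dB means the relative distance is roughly unchanged."""
        return 10.0 * np.log10(np.asarray(p_new) / np.asarray(p_old))

    # Received powers (mW) of two successive HELLO packets from 3 neighbors.
    p_old = [1.00, 0.50, 0.20]
    p_new = [0.90, 0.55, 0.05]   # the third neighbor is receding quickly

    m_rel = relative_mobility_db(p_new, p_old)
    aggregate = np.sqrt(np.mean(m_rel ** 2))  # variance about zero, then root
    print(m_rel.round(2), "aggregate mobility:", round(float(aggregate), 2))
    # Lower aggregate mobility -> more stable clusterhead candidate.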
Reliable MAC layer multicast in IEEE 802.11 wireless networks Multicast/broadcast is an important service primitive in networks. The IEEE 802.11 multicast/broadcast protocol is based on the basic access procedure of Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA). This protocol does not provide any media access control (MAC) layer recovery on multicast/broadcast frames. As a result, the reliability of the multicast/broadcast service is reduced due to the increased probability of lost frames resulting from interference or collisions. In this paper, we propose a reliable Batch Mode Multicast MAC protocol, BMMM, which substantially reduces the number of contention phases, thus considerably reducing the time required for a multicast/broadcast. We then propose a Location Aware Multicast MAC protocol, LAMM, that uses station location information to further improve upon BMMM. Extensive analysis and simulation results validate the reliability and efficiency of our multicast MAC protocols.
Eventual Leader Election with Weak Assumptions on Initial Knowledge, Communication Reliability, and Synchrony This paper considers the eventual leader election problem in asynchronous message-passing systems where an arbitrary number t of processes can crash (t < n, where n is the total number of processes). It considers weak assumptions both on the initial knowledge of the processes and on the network behavior. More precisely, initially, a process knows only its identity and the fact that the process identities are different and totally ordered (it knows neither n nor t). Two eventual leader election protocols and a lower bound are presented. The first protocol assumes that a process also knows a lower bound α on the number of processes that do not crash. This protocol requires the following behavioral properties from the underlying network: the graph made up of the correct processes and fair lossy links is strongly connected, and there is a correct process connected to (n − f) − α other correct processes (where f is the actual number of crashes in the considered run) through eventually timely paths (paths made up of correct processes and eventually timely links). This protocol is not communication-efficient in the sense that each correct process has to send messages forever. The second protocol is communication-efficient: after some time, only the final common leader has to send messages forever. This protocol does not require the processes to know α, but requires stronger properties from the underlying network: each pair of correct processes has to be connected by fair lossy links (one in each direction), and there is a correct process whose n − f − 1 output links to the rest of the correct processes have to be eventually timely. A matching lower bound result shows that any eventual leader election protocol must have runs with this number of eventually timely links, even if all processes know all the process identities. In addition to being communication-efficient, the second protocol has another noteworthy efficiency property: whether the run is finite or infinite, all the local variables and message fields have a finite domain in the run.
Asynchronous implementation of failure detectors Unreliable failure detectors introduced by Chandra and Toueg are abstract mechanisms that provide information on process failures. On the one hand, failure detectors allow to state the minimal requirements on process failures that allow to solve problems that cannot be solved in purely asynchronous systems. But, on the other hand, they cannot be implemented in such systems: their implementation requires that the underlying distributed system be enriched with additional assumptions. The usual failure detector implementations rely on additional synchrony assumptions (e.g., partial synchrony). This paper proposes a new look at the implementation of failure detectors and more specifically at Chandra-Toueg's failure detectors. The proposed approach does not rely on synchrony assumptions (e.g., it allows the communication delays to always increase). It is based on a query-response mechanism and assumes that the query/response messages exchanged obey a pattern where the responses from some processes to a query arrive among the (n − f) first ones (n being the total number of processes, f the maximum number of them that can crash, with 1 ≤ f < n). When we consider the particular case f = 1, and the implementation of a failure detector of the class denoted ⋄S (the weakest class that allows to solve the consensus problem), the additional assumption the underlying system has to satisfy boils down to a simple channel property, namely, there is eventually a pair of processes (pi, pj) such that the channel connecting them is never the slowest among the channels connecting pi or pj to the other processes. A probabilistic analysis shows that this requirement is practically met in asynchronous distributed systems.
Communication-efficient leader election in crash-recovery systems Abstract: This work addresses the leader election problem in partially synchronous distributed systems where processes can crash and recover. More precisely, it focuses on implementing the Omega failure detector class, which provides a leader election functionality, in the crash-recovery failure model. The concepts of communication efficiency and near-efficiency for an algorithm implementing Omega are defined. Depending on the use or not of stable storage, the property satisfied by unstable processes, i.e., those that crash and recover infinitely often, varies. Two algorithms implementing Omega are presented. In the first algorithm, which is communication-efficient and uses stable storage, eventually and permanently unstable processes agree on the leader with correct processes. In the second algorithm, which is near-communication-efficient and does not use stable storage, processes start their execution with no leader in order to avoid the disagreement among unstable processes, that will agree on the leader with correct processes after receiving a first message from the leader.
Map construction and exploration by mobile agents scattered in a dangerous network We consider the map construction problem in a simple, connected graph by a set of mobile computation entities or agents that start from scattered locations throughout the graph. The problem is further complicated by dangerous elements, nodes and links, in the graph that eliminate agents traversing or arriving at them. The agents working in the graph communicate using a limited amount of storage at each node and work asynchronously. We present a deterministic algorithm that solves the exploration and map construction problems. The end result is also a rooted spanning tree and the election of a leader. The total cost of the algorithm is O(ns·m) moves, where m is the number of links in the network and ns is the number of safe nodes, improving the existing O(m²) bound.
Coordination of groups of mobile autonomous agents using nearest neighbor rules In a recent Physical Review Letters article, Vicsek et al. propose a simple but compelling discrete-time model of n autonomous agents (i.e., points or particles) all moving in the plane with the same speed but with different headings. Each agent's heading is updated using a local rule based on the average of its own heading plus the headings of its "neighbors." In their paper, Vicsek et al. provide simulation results which demonstrate that the nearest neighbor rule they are studying can cause all agents to eventually move in the same direction despite the absence of centralized coordination and despite the fact that each agent's set of nearest neighbors changes with time as the system evolves. This paper provides a theoretical explanation for this observed behavior. In addition, convergence results are derived for several other similarly inspired models. The Vicsek model proves to be a graphic example of a switched linear system which is stable, but for which there does not exist a common quadratic Lyapunov function.
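The heading-update rule analyzed in the paper is short enough to simulate directly: each agent replaces its heading with the average heading of itself and all neighbors within radius r, then moves one step at fixed speed. A minimal noise-free numpy sketch (parameters arbitrary; averaging is done on unit vectors to avoid angle wrap-around):

    import numpy as np

    rng = np.random.default_rng(1)
    n, speed, r, steps = 40, 0.03, 0.25, 200
    pos = rng.random((n, 2))                      # positions on the unit torus
    theta = rng.uniform(-np.pi, np.pi, n)         # initial headings

    for _ in range(steps):
        d = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1)
        nbr = d < r                               # neighbor mask, includes self
        c = (nbr * np.cos(theta)).sum(axis=1)
        s = (nbr * np.sin(theta)).sum(axis=1)
        theta = np.arctan2(s, c)                  # nearest-neighbor averaging
        pos = (pos + speed * np.c_[np.cos(theta), np.sin(theta)]) % 1.0

    print("heading spread:", float(theta.std()))  # shrinks as agents align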
Wireless Communications Transmitter Performance Enhancement Using Advanced Signal Processing Algorithms Running in a Hybrid DSP/FPGA Platform This paper deals with digital baseband signal processing algorithms, seen as enabling technologies for software-enabled radios, that are intended for the correction of the analog front end. In particular, this paper focuses on the design, optimization and testability of predistortion functions suitable for the linearization of narrowband and wideband transmitters developed with a hybrid DSP/FPGA platform. To select the best algorithm for the identification of the predistortion function, singular value decomposition, recursive least squares (RLS), and QR-RLS algorithms are implemented on the same digital signal processor, and the computational complexity, execution time, accuracy and required resources are studied. The hardware implementation of the predistortion function is then carefully performed in order to meet the real-time execution requirements.
Software radio architecture: a mathematical perspective As the software radio makes its transition from research to practice, it becomes increasingly important to establish provable properties of the software radio architecture on which product developers and service providers can base technology insertion decisions. Establishing provable properties requires a mathematical perspective on the software radio architecture. This paper contributes to that perspective by critically reviewing the fundamental concept of the software radio, using mathematical models to characterize this rapidly emerging technology in the context of similar technologies like programmable digital radios. The software radio delivers dynamically defined services through programmable processing capacity that has the mathematical structure of the Turing machine. The bounded recursive functions, a subset of the total recursive functions, are shown to be the largest class of Turing-computable functions for which software radios exhibit provable stability in plug-and-play scenarios. Understanding the topological properties of the software radio architecture promotes plug-and-play applications and cost-effective reuse. Analysis of these topological properties yields a layered distributed virtual machine reference model and a set of architecture design principles for the software radio. These criteria may be useful in defining interfaces among hardware, middleware, and higher-level software components that are needed for cost-effective software reuse.
A 41-phase switched-capacitor power converter with 3.8mV output ripple and 81% efficiency in baseline 90nm CMOS.
Fundamental analysis of a car to car visible light communication system This paper presents a mathematical model for car-to-car (C2C) visible light communications (VLC) that aims to predict the system performance under different communication geometries. A market-weighted headlamp beam pattern model is employed. We consider both the line-of-sight (LOS) and non-line-of-sight (NLOS) links, and outline the relationship between the communication distance, the system bit error rate (BER) performance and the BER distribution on a vertical plane. Results show that by placing a photodetector (PD) at a height of 0.2-0.4 m above the road surface in the car, the communication coverage range can be extended up to 20 m at a data rate of 2 Mbps.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54 ×(CPU), 1.56 ×(GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02 ×(memristor-based PIM) and 10.48 ×(CMOS-based PIM), on average.
1.020061
0.015765
0.015765
0.013224
0.01264
0.008834
0.006499
0.00025
0
0
0
0
0
0
Best path in mountain environment based on parallel A* algorithm and Apache Spark The pathfinding problem has several applications in our lives and is widely used in virtual environments. It has different goals, such as the shortest path, the most secure path, or the optimal path. Pathfinding deals with a large amount of data, since it considers every point located in a 2D or 3D scene, and the number of possibilities in such a problem is huge. Moreover, it depends on the criteria chosen to define the best path. In this paper, we introduce a parallel A* algorithm to find the optimal path using Apache Spark. The proposed algorithm is evaluated in terms of runtime, speedup, efficiency, and cost on a generated dataset with different sizes (small, medium, and large). The generated dataset considers real terrain challenges, such as slopes and obstacles. An HDInsight (Hadoop) cluster provided by Azure has been used to run the application. The proposed algorithm reached a speedup of up to 4.85 running on six worker nodes.
Unmanned Aerial Vehicle Path Planning Based On A* Algorithm And Its Variants In 3d Environment Finding a safe and optimum path from the source node to the target node, while preventing collisions with environmental obstacles, is always a challenging task. This task becomes even more complicated when the application area includes Unmanned Aerial Vehicles (UAVs). This is because a UAV follows an aerial path to reach the target node from the source node, and aerial paths are defined in 3D space. The A* (A-star) algorithm is the path planning strategy of choice to solve path planning problems in such scenarios because of its simplicity in implementation and promise of optimality. However, the A* algorithm guarantees to find the shortest path on graphs but does not guarantee to find the shortest path in a real continuous environment. Theta* (Theta-star) and Lazy Theta* (Lazy Theta-star) algorithms are variants of the A* algorithm that can overcome this shortcoming at the cost of an increase in computational time. In this research work, a comparative analysis of A-star, Theta-star, and Lazy Theta-star path planning strategies is presented in a 3D environment. The ability of these algorithms is tested in 2D and 3D scenarios with distinct dimensions and obstacle complexity. To present a comparative performance analysis of the considered algorithms, two performance metrics are used: computational time, which measures the time taken to generate the path, and path length, which represents the length of the generated path.
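Since both of the preceding abstracts lean on the textbook A* procedure, a compact reference implementation may be useful. This is a minimal sketch under illustrative assumptions (a 4-connected occupancy grid, unit move costs, Manhattan-distance heuristic); it is not the authors' Spark-parallel or 3D variants.

```python
import heapq

def astar(grid, start, goal):
    """A* on a 4-connected occupancy grid (True = obstacle). Manhattan
    distance is an admissible heuristic here, so the returned path is
    optimal on the grid graph."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_heap = [(h(start), 0, start, None)]
    parent, g_best = {}, {start: 0}
    while open_heap:
        f, g, node, par = heapq.heappop(open_heap)
        if node in parent:          # already expanded with a better g
            continue
        parent[node] = par
        if node == goal:            # reconstruct path by walking parents
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and not grid[nr][nc]:
                ng = g + 1
                if ng < g_best.get((nr, nc), float("inf")):
                    g_best[(nr, nc)] = ng
                    heapq.heappush(open_heap,
                                   (ng + h((nr, nc)), ng, (nr, nc), node))
    return None  # goal unreachable

grid = [[False, False, False],
        [True,  True,  False],
        [False, False, False]]
print(astar(grid, (0, 0), (2, 0)))  # routes around the wall of obstacles
```

On a grid graph with an admissible heuristic such as this one, the returned path is optimal, which is exactly the property the second abstract contrasts with any-angle variants like Theta* in continuous space.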
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use dominance frontiers, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
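To make the dominance-frontier idea concrete, here is a short sketch in the spirit of the definition; it assumes immediate dominators are already computed (the `preds`/`idom` encoding is mine) and follows the well-known walk-up formulation rather than the paper's exact algorithm.

```python
def dominance_frontiers(preds, idom):
    """Compute dominance frontiers from immediate dominators.
    preds: node -> list of CFG predecessors
    idom:  node -> immediate dominator (the entry node maps to itself)
    DF(x) collects the nodes where x's dominance ends, which is where
    SSA phi-functions must be placed for definitions in x."""
    df = {n: set() for n in preds}
    for n, ps in preds.items():
        if len(ps) < 2:
            continue                 # only join nodes contribute
        for p in ps:
            runner = p               # walk up from each predecessor
            while runner != idom[n]:
                df[runner].add(n)
                runner = idom[runner]
    return df

# diamond CFG: entry -> a, b; a, b -> merge
preds = {"entry": [], "a": ["entry"], "b": ["entry"], "merge": ["a", "b"]}
idom  = {"entry": "entry", "a": "entry", "b": "entry", "merge": "entry"}
print(dominance_frontiers(preds, idom))  # 'a' and 'b' each get {'merge'}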
The part-time parliament Recent archaeological discoveries on the island of Paxos reveal that the parliament functioned despite the peripatetic propensity of its part-time legislators. The legislators maintained consistent copies of the parliamentary record, despite their frequent forays from the chamber and the forgetfulness of their messengers. The Paxon parliament's protocol provides a new way of implementing the state machine approach to the design of distributed systems.
Chord: a scalable peer-to-peer lookup protocol for internet applications A fundamental problem that confronts peer-to-peer applications is the efficient location of the node that stores a desired data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis and simulations show that Chord is scalable: Communication cost and the state maintained by each node scale logarithmically with the number of Chord nodes.
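The key-to-node mapping Chord provides is easy to sketch. Below is a toy consistent-hashing successor lookup under my own simplifying assumptions: a 16-bit identifier ring and a globally known, sorted membership list. Real Chord resolves the same successor with O(log n) messages via finger tables rather than a full node list.

```python
import hashlib
from bisect import bisect_left

M = 16  # identifier bits; toy value (Chord's SHA-1 ring uses m = 160)

def chord_id(name: str) -> int:
    """Hash a node address or key name onto the 2**M identifier circle."""
    return int.from_bytes(hashlib.sha1(name.encode()).digest(), "big") % (2 ** M)

def successor(node_ids, key_id):
    """The node responsible for a key is its successor: the first node
    identifier equal to or following the key, clockwise with wrap-around."""
    ids = sorted(node_ids)
    i = bisect_left(ids, key_id)    # first id >= key_id
    return ids[i % len(ids)]        # wrap to the smallest id if past the end

nodes = [chord_id(f"node{i}") for i in range(8)]
key = chord_id("my-data-item")
print(f"key {key} is stored at node {successor(nodes, key)}")
```

Storing the key/data pair at that successor is exactly the "data location on top of Chord" layering the abstract describes.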
Tapestry: a resilient global-scale overlay for service deployment We present Tapestry, a peer-to-peer overlay routing infrastructure offering efficient, scalable, location-independent routing of messages directly to nearby copies of an object or service using only localized resources. Tapestry supports a generic decentralized object location and routing applications programming interface using a self-repairing, soft-state-based routing layer. The paper presents the Tapestry architecture, algorithms, and implementation. It explores the behavior of a Tapestry deployment on PlanetLab, a global testbed of approximately 100 machines. Experimental results show that Tapestry exhibits stable behavior and performance as an overlay, despite the instability of the underlying network layers. Several widely distributed applications have been implemented on Tapestry, illustrating its utility as a deployment infrastructure.
The GPU Computing Era GPU computing is at a tipping point, becoming more widely used in demanding consumer applications and high-performance computing. This article describes the rapid evolution of GPU architectures—from graphics processors to massively parallel many-core multiprocessors, recent developments in GPU computing architectures, and how the enthusiastic adoption of CPU+GPU coprocessing is accelerating parallel applications.
Improved delay-dependent stability criteria for time-delay systems This note provides an improved asymptotic stability condition for time-delay systems in terms of a strict linear matrix inequality. Unlike previous methods, the mathematical development avoids bounding certain cross terms which often leads to conservatism. When time-varying norm-bounded uncertainties appear in a delay system, an improved robust delay-dependent stability condition is also given. Examples are provided to demonstrate the reduced conservatism of the proposed conditions. Index Terms—Delay-dependent condition, linear matrix inequality (LMI), time-delay systems, uncertain systems.
Friends and neighbors on the Web The Internet has become a rich and large repository of information about us as individuals. Anything from the links and text on a user’s homepage to the mailing lists the user subscribes to are reflections of social interactions a user has in the real world. In this paper we devise techniques and tools to mine this information in order to extract social networks and the exogenous factors underlying the networks’ structure. In an analysis of two data sets, from Stanford University and the Massachusetts Institute of Technology (MIT), we show that some factors are better indicators of social connections than others, and that these indicators vary between user populations. Our techniques provide potential applications in automatically inferring real world connections and discovering, labeling, and characterizing communities.
On the time-complexity of broadcast in multi-hop radio networks: an exponential gap between determinism and randomization The time-complexity of deterministic and randomized protocols for achieving broadcast (distributing a message from a source to all other nodes) in arbitrary multi-hop radio networks is investigated. In many such networks, communication takes place in synchronous time-slots. A processor receives a message at a certain time-slot if exactly one of its neighbors transmits at that time-slot. We assume no collision-detection mechanism; i.e., it is not always possible to distinguish the case where no neighbor transmits from the case where several neighbors transmit simultaneously. We present a randomized protocol that achieves broadcast in time which is optimal up to a logarithmic factor. In particular, with probability 1 − ε, the protocol achieves broadcast within O((D + log(n/ε)) · log n) time-slots, where n is the number of processors in the network and D its diameter. On the other hand, we prove a linear lower bound on the deterministic time-complexity of broadcast in this model. Namely, we show that any deterministic broadcast protocol requires Ω(n) time-slots, even if the network has diameter 3, and n is known to all processors. These two results demonstrate an exponential gap in complexity between randomization and determinism.
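The collision rule above is simple to simulate. The sketch below is an illustrative Decay-style phase in the spirit of this paper's randomized protocol (names, the fixed slot count, and the toy topology are mine); repeatedly halving the sender set is what eventually lets a lone transmitter get through to a node whose neighbors would otherwise collide.

```python
import random

def decay_broadcast_round(senders, neighbors, slots):
    """One phase of a Decay-style randomized schedule: in each of
    'slots' synchronous slots every still-active sender transmits and
    then stays active with probability 1/2. With no collision detection,
    a node receives only when exactly one neighbor transmits in a slot."""
    active = set(senders)
    received = set()
    for _ in range(slots):
        for v, nbrs in neighbors.items():
            if v not in senders and len(nbrs & active) == 1:
                received.add(v)          # unique transmitter: success
        active = {u for u in active if random.random() < 0.5}
    return received

# two senders 'a' and 'b' share the neighbor 'c': while both transmit,
# 'c' hears only collisions; once one drops out, 'c' receives
random.seed(3)
neighbors = {"a": {"c"}, "b": {"c"}, "c": {"a", "b"}}
print(decay_broadcast_round({"a", "b"}, neighbors, slots=4))
```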
Gossiping and Broadcasting versus Computing Functions in Networks In the theory of dissemination of information in interconnection networks (gossiping and broadcasting) one assumes that a message consists of a set of distinguishable, atomic pieces of information, and that one communication pattern is used for solving a task. In this paper, a close connection is established between this theory and a situation in which functions are computed in synchronous networks without restrictions on the type of message used and with possibly different communication patterns for different inputs. The following restriction on the way processors communicate turns out to be essential: (*) "Predictable reception": At the beginning of a step a processor knows whether it is to receive a message across one of its links or not. We show that if (*) holds then computing an n-ary function with a "critical input" (e.g., the OR of n bits) and distributing the result to all processors on an n-processor network G takes exactly as long as performing gossiping in G. Further we study the complexity of broadcasting one bit in a synchronous network, assuming that in one step a processor can send only one message, but without assuming (*), and broadcasting one bit on parallel random-access machines (PRAMs) and distributed memory machines (DMMs) with the ARBITRARY access resolution rule.
Power saving of a dynamic width controller for a monolithic current-mode CMOS DC-DC converter We propose the dynamic power MOS width controlling technique and the adaptive gate driver voltage technique, and compare them to find the better approach to power saving in DC-DC converters. The dynamic power MOS width controlling technique achieves a much larger improvement in power consumption than the adaptive gate driver voltage technique whether the load current is heavy or light. After the dynamic power MOS width modification, the simulation results show that the efficiency of the current-mode DC-DC buck converter can be improved from 92% to about 98% in heavy load and from 15% to about 16.3% in light load. The adaptive gate driver voltage technique, by contrast, yields only a small improvement in power saving. This means that the dynamic width controller is the better approach to power saving in the DC-DC converter.
A 0.5-V 2.5-GHz high-gain low-power regenerative amplifier based on Colpitts oscillator topology in 65-nm CMOS This paper proposes the regenerative amplifier based on the Colpitts oscillator topology. The positive feedback amount was optimized analytically in the circuit design. The proposed regenerative amplifier was fabricated in 65 nm CMOS technology. The measurement results showed 28.7 dB gain and 6.4 dB noise figure at 2.55 GHz while consuming 120 μW under the 0.5-V power supply.
Multi-Channel Neural Recording Implants: A Review. The recently growing progress in neuroscience research and relevant achievements, as well as advancements in the fabrication process, have increased the demand for neural interfacing systems. Brain-machine interfaces (BMIs) have been revealed to be a promising method for the diagnosis and treatment of neurological disorders and the restoration of sensory and motor function. Neural recording implants, as a part of BMI, are capable of capturing brain signals, and amplifying, digitizing, and transferring them outside of the body with a transmitter. The main challenges of designing such implants are minimizing power consumption and the silicon area. In this paper, multi-channel neural recording implants are surveyed. After presenting various neural-signal features, we investigate main available neural recording circuit and system architectures. The fundamental blocks of available architectures, such as neural amplifiers, analog to digital converters (ADCs) and compression blocks, are explored. We cover the various topologies of neural amplifiers, provide a comparison, and probe their design challenges. To achieve a relatively high SNR at the output of the neural amplifier, noise reduction techniques are discussed. Also, to transfer neural signals outside of the body, they are digitized using data converters, then in most cases, the data compression is applied to mitigate power consumption. We present the various dedicated ADC structures, as well as an overview of main data compression methods.
1.2
0.1
0
0
0
0
0
0
0
0
0
0
0
0
Halcyon: An Accurate Basecaller Exploiting An Encoder-Decoder Model With Monotonic Attention Motivation: In recent years, nanopore sequencing technology has enabled inexpensive long-read sequencing, which promises reads longer than a few thousand bases. Such long-read sequences contribute to the precise detection of structural variations and accurate haplotype phasing. However, deciphering precise DNA sequences from noisy and complicated nanopore raw signals remains a crucial demand for downstream analyses based on higher-quality nanopore sequencing, although various basecallers have been introduced to date. Results: To address this need, we developed a novel basecaller, Halcyon, that incorporates neural-network techniques frequently used in the field of machine translation. Our model employs monotonic-attention mechanisms to learn semantic correspondences between nucleotides and signal levels without any pre-segmentation against input signals. We evaluated performance with a human whole-genome sequencing dataset and demonstrated that Halcyon outperformed existing third-party basecallers and achieved competitive performance against the latest Oxford Nanopore Technologies' basecallers.
SWIFOLD: Smith-Waterman implementation on FPGA with OpenCL for long DNA sequences. The results suggest that SWIFOLD can be a serious contender for accelerating the SW alignment of DNA sequences of unrestricted size in an affordable way, reaching 125 GCUPS on average and a peak of almost 270 GCUPS.
GSWABE: faster GPU-accelerated sequence alignment with optimal alignment retrieval for short DNA sequences In this paper, we present GSWABE, a graphics processing unit (GPU)-accelerated pairwise sequence alignment algorithm for a collection of short DNA sequences. This algorithm supports all-to-all pairwise global, semi-global and local alignment, and retrieves optimal alignments on Compute Unified Device Architecture (CUDA)-enabled GPUs. All three alignment types are based on dynamic programming and share almost the same computational pattern. Thus, we have investigated a general tile-based approach to facilitating fast alignment by deeply exploring the powerful compute capability of CUDA-enabled GPUs. The performance of GSWABE has been evaluated on a Kepler-based Tesla K40 GPU using a variety of short DNA sequence datasets. The results show that our algorithm can yield a performance of up to 59.1 billion cell updates per second (GCUPS), 58.5 GCUPS and 50.3 GCUPS for global, semi-global and local alignment, respectively. Furthermore, on the same system GSWABE runs up to 156.0 times faster than the Streaming SIMD Extensions (SSE)-based SSW library and up to 102.4 times faster than the CUDA-based MSA-CUDA (the first stage) in terms of local alignment. Compared with the CUDA-based gpu-pairAlign, GSWABE demonstrates stable and consistent speedups with a maximum speedup of 11.2, 10.7, and 10.6 for global, semi-global, and local alignment, respectively. Copyright © 2014 John Wiley & Sons, Ltd.
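For readers unfamiliar with the recurrence these accelerators parallelize, a textbook CPU Smith-Waterman scorer is sketched below. The scoring parameters are arbitrary illustrative values, and this sequential version deliberately omits the tiling and GPU specifics that are the papers' actual contributions.

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
    """Textbook O(len(a)*len(b)) Smith-Waterman local alignment score.
    H[i][j] is the best score of any local alignment ending at a[i-1]
    and b[j-1]; the 0 term lets an alignment restart anywhere, which is
    what makes the alignment local rather than global."""
    H = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    best = 0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            H[i][j] = max(0,
                          H[i - 1][j - 1] + s,   # match/mismatch
                          H[i - 1][j] + gap,     # gap in b
                          H[i][j - 1] + gap)     # gap in a
            best = max(best, H[i][j])
    return best

print(smith_waterman("TGTTACGG", "GGTTGACTA"))  # best local alignment score
```

Each H[i][j] cell update is one "cell update" in the GCUPS figures quoted above, which is why the DP's uniform data flow maps so well onto GPU tiles and FPGA systolic arrays.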
Emerging Trends in Design and Applications of Memory-Based Computing and Content-Addressable Memories Content-addressable memory (CAM) and associative memory (AM) are types of storage structures that allow searching by content as opposed to searching by address. Such memory structures are used in diverse applications ranging from branch prediction in a processor to complex pattern recognition. In this paper, we review the emerging challenges and opportunities in implementing different varieties of...
FindeR: Accelerating FM-Index-Based Exact Pattern Matching in Genomic Sequences through ReRAM Technology Genomics is the critical key to enabling precision medicine, ensuring global food security and enforcing wildlife conservation. The massive genomic data produced by various genome sequencing technologies presents a significant challenge for genome analysis. Because of errors from sequencing machines and genetic variations, approximate pattern matching (APM) is a must for practical genome analysis. Recent work proposes FPGA, ASIC and even process-in-memory-based accelerators to boost the APM throughput by accelerating dynamic-programming-based algorithms (e.g., Smith-Waterman). However, existing accelerators lack the efficient hardware acceleration for the exact pattern matching (EPM) that is an even more critical and essential function widely used in almost every step of genome analysis including assembly, alignment, annotation and compression. State-of-the-art genome analysis adopts the FM-Index that augments the space-efficient BWT with additional data structures permitting fast EPM operations. But the FM-Index is notorious for poor spatial locality and massive random memory accesses. In this paper, we propose a ReRAM-based process-in-memory architecture, FindeR, to enhance the FM-Index EPM search throughput in genomic sequences. We build a reliable and energy-efficient Hamming distance unit to accelerate the computing kernel of FM-Index search using commodity ReRAM chips without introducing extra CMOS logic. We further architect a full-fledged FM-Index search pipeline and improve its search throughput by lightweight scheduling on the NVDIMM. We also create a system library for programmers to invoke FindeR to perform EPMs in genome analysis. Compared to state-of-the-art accelerators, FindeR improves the FM-Index search throughput by 83% ~ 30K× and throughput per Watt by 3.5×~42.5K×.
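The FM-index backward search that FindeR accelerates can be illustrated compactly. The following toy Python builds the BWT by brute force and counts exact matches with the standard LF-mapping recurrence; everything here (rotation-sort construction, dense occ tables) is a didactic simplification of the data structure, not the paper's ReRAM design.

```python
def bwt_index(text):
    """Build a toy FM-index: BWT via full rotation sort (fine for small
    inputs; real indexes use suffix arrays), plus C[] and occ tables."""
    text += "$"
    order = sorted(range(len(text)), key=lambda i: text[i:] + text[:i])
    bwt = "".join(text[i - 1] for i in order)   # last column of sorted rotations
    C, total = {}, 0                            # C[c] = #chars smaller than c
    for c in sorted(set(bwt)):
        C[c] = total
        total += bwt.count(c)
    occ = {c: [0] for c in C}                   # occ[c][i] = #c in bwt[:i]
    for ch in bwt:
        for c in occ:
            occ[c].append(occ[c][-1] + (ch == c))
    return C, occ

def count_occurrences(pattern, C, occ):
    """Backward search: maintain the half-open row range [lo, hi) of
    suffixes prefixed by the growing pattern suffix; one rank query pair
    per pattern character."""
    lo, hi = 0, len(occ[next(iter(occ))]) - 1
    for c in reversed(pattern):
        if c not in C:
            return 0
        lo = C[c] + occ[c][lo]
        hi = C[c] + occ[c][hi]
        if lo >= hi:
            return 0
    return hi - lo

C, occ = bwt_index("GATTACAGATTACA")
print(count_occurrences("ATTA", C, occ))  # 2
```

The occ lookups are exactly the scattered, random rank queries the abstract calls out as the FM-index's locality problem.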
GateKeeper-GPU: Fast and Accurate Pre-Alignment Filtering in Short Read Mapping We introduce GateKeeper-GPU, a fast and accurate pre-alignment filter that efficiently reduces the need for expensive sequence alignment. GateKeeper-GPU improves the filtering accuracy of GateKeeper, and by exploiting the massive parallelism provided by GPU threads it concurrently examines numerous sequence pairs rapidly. GateKeeper-GPU is available at https://github.com/BilkentCompGen/GateKeeper-...
An FPGA Implementation of A Portable DNA Sequencing Device Based on RISC-V Miniature and mobile DNA sequencers are steadily growing in popularity as effective tools for genetics research. Even as basecalling algorithms evolve and their accuracy increases, basecalling poses a serious challenge for small computing devices. Although general-purpose computing chips such as CPUs and GPUs can achieve fast results, they are not energy efficient enough for mobile applications. This paper presents an innovative solution: a basecalling hardware architecture based on the RISC-V ISA. After validation with our custom FPGA verification platform, it demonstrates a 1.95x energy-efficiency ratio compared to x86, and a 38% improvement in energy-efficiency ratio compared to ARM. In addition, this study also completes the verification work for subsequent ASIC designs.
Accelerating read mapping with FastHASH. With the introduction of next-generation sequencing (NGS) technologies, we are facing an exponential increase in the amount of genomic sequence data. The success of all medical and genetic applications of next-generation sequencing critically depends on the existence of computational techniques that can process and analyze the enormous amount of sequence data quickly and accurately. Unfortunately, the current read mapping algorithms have difficulties in coping with the massive amounts of data generated by NGS.We propose a new algorithm, FastHASH, which drastically improves the performance of the seed-and-extend type hash table based read mapping algorithms, while maintaining the high sensitivity and comprehensiveness of such methods. FastHASH is a generic algorithm compatible with all seed-and-extend class read mapping algorithms. It introduces two main techniques, namely Adjacency Filtering, and Cheap K-mer Selection.We implemented FastHASH and merged it into the codebase of the popular read mapping program, mrFAST. Depending on the edit distance cutoffs, we observed up to 19-fold speedup while still maintaining 100% sensitivity and high comprehensiveness.
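To give a flavor of the Adjacency Filtering idea (checking that a read's other k-mers occur at consistent, adjacent reference offsets before paying for full verification), here is a toy sketch. The function names, the non-overlapping seed choice, and the miss threshold are my own illustrative simplifications of the FastHASH scheme, not mrFAST's implementation.

```python
def build_kmer_index(ref, k):
    """Hash-table index: k-mer -> sorted list of positions in the reference."""
    idx = {}
    for i in range(len(ref) - k + 1):
        idx.setdefault(ref[i:i + k], []).append(i)
    return idx

def candidate_locations(read, idx, k, max_missing_kmers=0):
    """Adjacency-filtering-style check (simplified): a mapping location
    suggested by the read's first k-mer survives only if the read's
    remaining k-mers also occur at their expected adjacent reference
    offsets. Survivors would then go on to full (expensive) verification."""
    seeds = [read[j:j + k] for j in range(0, len(read) - k + 1, k)]
    survivors = []
    for pos in idx.get(seeds[0], []):
        misses = 0
        for s_num, seed in enumerate(seeds[1:], start=1):
            if pos + s_num * k not in idx.get(seed, []):
                misses += 1
        if misses <= max_missing_kmers:     # budget for edits/mismatches
            survivors.append(pos)
    return survivors

ref = "ACGTACGTTTACGGACGTACGT"
idx = build_kmer_index(ref, 4)
print(candidate_locations("ACGTACGT", idx, 4))  # [0, 14]: spurious hits pruned
```

With a zero miss budget only locations where every seed lines up survive; raising the budget trades filter strictness for edit-distance tolerance, mirroring the cutoffs mentioned in the abstract.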
A Linear Representation of Dynamics of Boolean Networks A new matrix product, called semi-tensor product of matrices, is reviewed. Using it, a matrix expression of logic is proposed, where a logical variable is expressed as a vector, a logical function is expressed as a multiple linear mapping. Under this framework, a Boolean network equation is converted into an equivalent algebraic form as a conventional discrete-time linear system. Analyzing the transition matrix of the linear system, formulas are obtained to show a) the number of fixed points; b) the numbers of cycles of different lengths; c) transient period, for all points to enter the set of attractors; and d) basin of each attractor. The corresponding algorithms are developed and used to some examples.
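The algebraic-form idea is easy to demonstrate on a toy network: encode each of the 2^n Boolean states as a one-hot vector, so one synchronous step becomes multiplication by a 0-1 transition matrix, and fixed points and cycle counts fall out of traces of matrix powers. The sketch below builds that matrix by brute-force enumeration rather than by the semi-tensor product construction the paper develops; the example network is mine.

```python
import numpy as np
from itertools import product

# Toy 2-node Boolean network: x1' = x1 AND x2, x2' = NOT x1.
f = lambda x1, x2: (x1 and x2, not x1)

# One-hot encode the 4 possible states so that one synchronous step is
# x_{t+1} = L @ x_t for a 4x4 0-1 matrix L (the paper's algebraic form).
states = list(product([True, False], repeat=2))
n = len(states)
L = np.zeros((n, n), dtype=int)
for j, s in enumerate(states):
    L[states.index(f(*s)), j] = 1       # column j sends state s to f(s)

# A fixed point is a state with L[i, i] == 1; trace(L^k) counts states
# lying on cycles whose length divides k.
print("fixed points:", [states[i] for i in range(n) if L[i, i] == 1])
for k in (1, 2, 3, 4):
    print(f"trace(L^{k}) =", np.trace(np.linalg.matrix_power(L, k)))
```

Reading attractors off powers of L is exactly the payoff of the linear representation: standard matrix analysis replaces exhaustive simulation of the logic.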
The Transitive Reduction of a Directed Graph
A new concept for wireless reconfigurable receivers In this article we present the Self-Adaptive Universal Receiver (SAUR), a novel wireless reconfigurable receiver architecture. This scheme is based on blind recognition of the system in use, operating on a new radio interface comprising two functional phases. The first phase performs a wideband analysis (WBA) on the received signal to determine its standard. The second phase corresponds to demodulation. Here we only focus on the WBA phase, which consists of an iterative process to find the bandwidth compatible with the associated signal processing techniques. The blind standard recognition performed in the last iteration step of this process uses radial basis function neural networks. This allows a strong analogy between our approach and conventional pattern recognition problems. The efficiency of this type of blind recognition is illustrated with the results of extensive simulations performed in our laboratory using true data of received signals.
FPGA Implementation of High-Frequency Software Radio Receiver State-of-the-art analog-to-digital converters allow the design of high-frequency software radio receivers that use baseband signal processing. However, such receivers are rarely considered in the literature. In this paper, we describe the design of a high-performance receiver operating at high frequencies, whose digital part is entirely implemented in an FPGA device. The design of the digital subsystem is given, together with the design of a low-cost analog front end.
A Hybrid Dynamic Load Balancing Algorithm For Distributed Systems Using Genetic Algorithms Dynamic Load Balancing (DLB) is a sine qua non in modern distributed systems to ensure the efficient utilization of the computing resources therein. This paper proposes a novel framework for hybrid dynamic load balancing that uses a Genetic Algorithm (GA) based supernode selection approach. The GA-based approach is useful in choosing optimally loaded nodes as the supernodes directly from the data set, thereby essentially improving the speed of the load balancing process. Applying the proposed GA-based approach, this work analyzes the performance of the hybrid DLB algorithm under different system states such as lightly loaded, moderately loaded, and highly loaded. The performance is measured with respect to three parameters: average response time, average round-trip time, and average completion time of the users. Further, it also evaluates the performance of the hybrid algorithm utilizing OnLine Transaction Processing (OLTP) and Sparse Matrix Vector Multiplication (SPMV) benchmark applications to analyze its adaptability to I/O-intensive, memory-intensive, or/and CPU-intensive applications. The experimental results show that the hybrid algorithm significantly improves performance under different system states and under a wide range of workloads compared to a traditional decentralized algorithm.
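As a rough illustration of GA-based supernode selection, the following toy evolves sets of k node indices toward lightly loaded choices. The chromosome encoding, operators, and load-only fitness are my own simplifications; the paper's fitness also reflects response and round-trip times.

```python
import random

def ga_select_supernodes(loads, k, pop_size=30, gens=50, mut=0.2):
    """Toy GA: a chromosome is a list of k distinct node indices; a
    lower total load of the chosen supernodes means higher fitness."""
    n = len(loads)
    fitness = lambda ch: sum(loads[i] for i in ch)   # lower is better
    pop = [random.sample(range(n), k) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        survivors = pop[: pop_size // 2]             # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            pool = list(set(a) | set(b))             # crossover: recombine parents
            child = random.sample(pool, k)
            if random.random() < mut:                # mutation: swap in a new node
                cand = random.randrange(n)
                if cand not in child:
                    child[random.randrange(k)] = cand
            children.append(child)
        pop = survivors + children
    return min(pop, key=fitness)

random.seed(7)
loads = [random.uniform(0, 100) for _ in range(40)]
best = ga_select_supernodes(loads, k=5)
print(sorted(best), round(sum(loads[i] for i in best), 1))
```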
OMNI: A Framework for Integrating Hardware and Software Optimizations for Sparse CNNs Convolution neural networks (CNNs), as one of today's main flavors of deep learning techniques, dominate various image recognition tasks. As the model size of modern CNNs continues to grow, neural network compression techniques have been proposed to prune the redundant neurons and synapses. However, prior techniques disconnect software neural network compression from hardware acceleration.
1.2
0.2
0.2
0.2
0.2
0.2
0.2
0.05
0
0
0
0
0
0
On stochastic gradient and subgradient methods with adaptive steplength sequences Traditionally, stochastic approximation (SA) schemes have been popular choices for solving stochastic optimization problems. However, the performance of standard SA implementations can vary significantly based on the choice of the steplength sequence, and in general, little guidance is provided about good choices. Motivated by this gap, we present two adaptive steplength schemes for strongly convex differentiable stochastic optimization problems, equipped with convergence theory, that aim to overcome some of the reliance on user-specific parameters. The first scheme, referred to as a recursive steplength stochastic approximation (RSA) scheme, optimizes the error bounds to derive a rule that expresses the steplength at a given iteration as a simple function of the steplength at the previous iteration and certain problem parameters. The second scheme, termed as a cascading steplength stochastic approximation (CSA) scheme, maintains the steplength sequence as a piecewise-constant decreasing function with the reduction in the steplength occurring when a suitable error threshold is met. Then, we allow for nondifferentiable objectives but with bounded subgradients over a certain domain. In such a regime, we propose a local smoothing technique, based on random local perturbations of the objective function, that leads to a differentiable approximation of the function. Assuming a uniform distribution on the local randomness, we establish a Lipschitzian property for the gradient of the approximation and prove that the obtained Lipschitz bound grows at a modest rate with problem size. This facilitates the development of an adaptive steplength stochastic approximation framework, which now requires sampling in the product space of the original measure and the artificially introduced distribution.
Randomized Gradient-Free Method for Multiagent Optimization Over Time-Varying Networks. In this brief, we consider multiagent optimization over a network where multiple agents try to minimize a sum of nonsmooth but Lipschitz-continuous functions, subject to a convex state constraint set. The underlying network topology is modeled as time-varying. We propose a randomized derivative-free method in which, at each update, random gradient-free oracles are utilized instead of the subgradients.
Distributed mirror descent method for multi-agent optimization with delay. This paper investigates a distributed optimization problem associated with a time-varying multi-agent network in the presence of delays, where each agent has local access to its convex objective function, and cooperatively minimizes a sum of convex objective functions of the agents over the network. Based on the mirror descent method, we develop a distributed algorithm to solve this problem by exploring the delayed gradient information. Furthermore, we analyze the effects of delayed gradients on the convergence of the algorithm and provide an explicit bound on the convergence rate as a function of the delay parameter, the network size and topology. Our results show that the delays are asymptotically negligible for smooth problems. The proposed algorithm can be viewed as a generalization of the distributed gradient-based projection methods since it utilizes a customized Bregman divergence instead of the usual Euclidean squared distance. Finally, some simulation results on a logistic regression problem are presented to demonstrate the effectiveness of the algorithm.
Gradient-free method for nonsmooth distributed optimization In this paper, we consider a distributed nonsmooth optimization problem over a computational multi-agent network. We first extend the (centralized) Nesterov's random gradient-free algorithm and Gaussian smoothing technique to the distributed case. Then, the convergence of the algorithm is proved. Furthermore, an explicit convergence rate is given in terms of the network size and topology. Our proposed method is free of gradient, which may be preferred by practical engineers. Since only the cost function value is required, our method may suffer a factor of up to d (the dimension of the agent) in convergence rate over that of the distributed subgradient-based methods in theory. However, our numerical simulations show that for some nonsmooth problems, our method can even achieve better performance than that of subgradient-based methods, which may be caused by the slow convergence in the presence of subgradient.
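The random gradient-free oracle underlying this line of work is compact enough to sketch. Below is a single-agent Nesterov-style Gaussian-smoothing estimator used inside a plain zeroth-order iteration; the distributed versions interleave such steps with consensus averaging. The step sizes, the smoothing parameter mu, and the test function are illustrative choices of mine.

```python
import numpy as np

def gf_oracle(f, x, rng, mu=1e-2):
    """Nesterov-style random gradient-free oracle: with u ~ N(0, I),
    g = (f(x + mu*u) - f(x)) / mu * u estimates the gradient of the
    Gaussian-smoothed surrogate f_mu(x) = E[f(x + mu*u)], using only
    function values (zeroth-order access), even when f is nonsmooth."""
    u = rng.standard_normal(x.shape)
    return (f(x + mu * u) - f(x)) / mu * u

# zeroth-order minimization of the nonsmooth function f(x) = ||x - c||_1
rng = np.random.default_rng(0)
c = np.array([1.0, -2.0, 0.5])
f = lambda x: np.abs(x - c).sum()
x = np.zeros(3)
for t in range(1, 20001):
    x -= (0.5 / np.sqrt(t)) * gf_oracle(f, x, rng)  # diminishing steps
print(np.round(x, 2))  # drifts toward the minimizer c = [1, -2, 0.5]
```

The dimension-dependent variance of this estimator is the source of the factor-of-d slowdown the abstract mentions relative to subgradient methods.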
Convergence of a Multi-Agent Projected Stochastic Gradient Algorithm for Non-Convex Optimization. We introduce a new framework for the convergence analysis of a class of distributed constrained non-convex optimization algorithms in multi-agent systems. The aim is to search for local minimizers of a non-convex objective function which is supposed to be a sum of local utility functions of the agents. The algorithm under study consists of two steps: a local stochastic gradient descent at each agent and a gossip step that drives the network of agents to a consensus. Under the assumption of decreasing stepsize, it is proved that consensus is asymptotically achieved in the network and that the algorithm converges to the set of Karush-Kuhn-Tucker points. As an important feature, the algorithm does not require the double-stochasticity of the gossip matrices. It is in particular suitable for use in a natural broadcast scenario for which no feedback messages between agents are required. It is proved that our results also hold if the number of communications in the network per unit of time vanishes at moderate speed as time increases, allowing potential savings of the network's energy. Applications to power allocation in wireless ad-hoc networks are discussed. Finally, we provide numerical results which sustain our claims.
Dual Averaging for Distributed Optimization: Convergence Analysis and Network Scaling The goal of decentralized optimization over a network is to optimize a global objective formed by a sum of local (possibly nonsmooth) convex functions using only local computation and communication. It arises in various application domains, including distributed tracking and localization, multi-agent coordination, estimation in sensor networks, and large-scale machine learning. We develop and analyze distributed algorithms based on dual subgradient averaging, and we provide sharp bounds on their convergence rates as a function of the network size and topology. Our analysis allows us to clearly separate the convergence of the optimization algorithm itself and the effects of communication dependent on the network structure. We show that the number of iterations required by our algorithm scales inversely in the spectral gap of the network, and confirm this prediction's sharpness both by theoretical lower bounds and simulations for various networks. Our approach includes the cases of deterministic optimization and communication, as well as problems with stochastic optimization and/or communication.
Bayesian learning in social networks We extend the standard model of social learning in two ways. First, we introduce a social network and assume that agents can only observe the actions of agents to whom they are connected by this network. Secondly, we allow agents to choose a different action at each date. If the network satisfies a connectedness assumption, the initial diversity resulting from diverse private information is eventually replaced by uniformity of actions, though not necessarily of beliefs, in finite time with probability one. We look at particular networks to illustrate the impact of network architecture on speed of convergence and the optimality of absorbing states. Convergence is remarkably rapid, so that asymptotic results are a good approximation even in the medium run.
Ad-hoc On-Demand Distance Vector Routing This paper describes work carried out as part of the GUIDE project at Lancaster University. The overall aim of the project is to develop a context-sensitive tourist guide for visitors to the city of Lancaster. Visitors are equipped with portable GUIDE ...
The Influence of the Sigmoid Function Parameters on the Speed of Backpropagation Learning Sigmoid function is the most commonly known function used in feed forward neural networks because of its nonlinearity and the computational simplicity of its derivative. In this paper we discuss a variant sigmoid function with three parameters that denote the dynamic range, symmetry and slope of the function respectively. We illustrate how these parameters influence the speed of backpropagation learning and introduce a hybrid sigmoidal network with different parameter configuration in different layers. By regulating and modifying the sigmoid function parameter configuration in different layers the error signal problem, oscillation problem and asymmetrical input problem can be reduced. To compare the learning capabilities and the learning rate of the hybrid sigmoidal networks with the conventional networks we have tested the two-spirals benchmark that is known to be a very difficult task for backpropagation and their relatives.
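A sketch of such a three-parameter sigmoid and its derivative is below; the parameter names d (dynamic range), s (symmetry point) and m (slope) are my own labels for the three roles the abstract describes, not necessarily the paper's symbols.

```python
import numpy as np

def sigmoid(x, d=1.0, s=0.0, m=1.0):
    """Three-parameter sigmoid: d sets the dynamic range (output spans
    (0, d)), s shifts the symmetry point, and m controls the slope."""
    return d / (1.0 + np.exp(-m * (x - s)))

def sigmoid_prime(x, d=1.0, s=0.0, m=1.0):
    """The derivative is expressible in the output itself,
    f'(x) = (m/d) * f(x) * (d - f(x)), which keeps backpropagation
    cheap: no extra exponentials beyond the forward pass."""
    y = sigmoid(x, d, s, m)
    return (m / d) * y * (d - y)

x = np.linspace(-4, 4, 5)
print(sigmoid(x, d=2.0, m=0.5))        # range (0, 2), gentler slope
print(sigmoid_prime(x, d=2.0, m=0.5))  # peaks at the symmetry point s
```

Layer-by-layer tuning of (d, s, m) is exactly the degree of freedom the hybrid sigmoidal network in the abstract exploits to reduce error-signal and asymmetrical-input problems.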
Analysis of First-Order Anti-Aliasing Integration Sampler Performance of the first-order anti-aliasing integration sampler used in software-defined radio (SDR) receivers is analyzed versus all practical nonidealities. The nonidealities considered in this paper are transconductor finite output resistance, switch resistance, nonzero rise and fall times of the sampling clock, charge injection, clock jitter, and noise.
Cache Games -- Bringing Access-Based Cache Attacks on AES to Practice Side channel attacks on cryptographic systems exploit information gained from physical implementations rather than theoretical weaknesses of a scheme. In recent years, major achievements were made for the class of so-called access-driven cache attacks. Such attacks exploit the leakage of the memory locations accessed by a victim process. In this paper we consider the AES block cipher and present an attack which is capable of recovering the full secret key in almost real time for AES-128, requiring only a very limited number of observed encryptions. Unlike previous attacks, we do not require any information about the plaintext (such as its distribution). Moreover, for the first time, we also show how the plaintext can be recovered without having access to the ciphertext at all. It is the first working attack on AES implementations using compressed tables, for which no efficient techniques to identify the beginning of AES rounds are known; this identification is the fundamental assumption underlying previous attacks. We have a fully working implementation of our attack which is able to recover AES keys after observing as little as 100 encryptions. It works against the OpenSSL 0.9.8n implementation of AES on Linux systems. Our spy process does not require any special privileges beyond those of a standard Linux user. A contribution of probably independent interest is a denial of service attack on the task scheduler of current Linux systems (CFS), which allows one to observe (on average) every single memory access of a victim process.
An Opportunistic Cognitive MAC Protocol for Coexistence with WLAN In recent decades, the demand for wireless spectrum has increased rapidly with the development of mobile communication services. Recent studies recognize that traditional fixed spectrum assignment does not use the spectrum efficiently. Such waste could be remedied with the advent of cognitive radio. Cognitive radio is a new type of technology that enables secondary usage of spectrum by unlicensed users. This paper presents an opportunistic cognitive MAC protocol (OC-MAC) for cognitive radios to access unoccupied resources opportunistically and coexist with wireless local area networks (WLAN). Through a primary traffic prediction model and a transmission etiquette, OC-MAC avoids causing fatal damage to licensed users. An ns2 simulation model is then developed to evaluate its performance in scenarios with coexisting WLAN and cognitive networks.
ΣΔ ADC with fractional sample rate conversion for software defined radio receiver.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54 ×(CPU), 1.56 ×(GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02 ×(memristor-based PIM) and 10.48 ×(CMOS-based PIM), on average.
1.122385
0.068889
0.068889
0.026602
0.022963
0.006709
0.000741
0
0
0
0
0
0
0
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
Evaluation of the Broadcast Operation in Kademlia Several proposals exist that try to enhance Distributed Hash Table (DHT) systems with broadcasting capabilities. None of them however specifically addresses the particularities of Kademlia, an important DHT, used in well known real applications. Our work analyzes the implications of Kademlia's use of XOR-based distance metrics and subsequently discusses the applicability of existing broadcasting proposals to it. Based on this, several algorithms for broadcasting in Kademlia have been implemented and experimentally evaluated under different conditions of churn and failure rate. All significant assessment criteria have been considered: node coverage, messages to nodes ratio, latency and imbalance factor. Since no perfect solution exists, a discussion on the choices and compromises to make depending on system characteristics or application priorities is presented. In addition, several enhancements are proposed that profit from Kademlia characteristics in order to make the broadcasting more robust against stale routing information or malfunctioning nodes.
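The XOR metric that shapes these broadcast algorithms is simple to demonstrate: the position of the highest differing bit tells you which subtree (k-bucket) a peer lies in, and prefix-based broadcast schemes delegate one message per such subtree. The 8-bit identifiers below are purely illustrative (Kademlia uses 160-bit ids), and the helper names are mine.

```python
ID_BITS = 8  # tiny id space for illustration; real Kademlia uses 160 bits

def xor_distance(a: int, b: int) -> int:
    """Kademlia's metric: d(a, b) = a XOR b, compared as an integer."""
    return a ^ b

def bucket_index(me: int, other: int) -> int:
    """Index of the subtree (k-bucket) that 'other' falls into relative
    to 'me': the position of the highest differing bit. Prefix-based
    broadcast delegates one message into each such subtree."""
    return xor_distance(me, other).bit_length() - 1

me = 0b10110010
peers = [0b10110111, 0b10100001, 0b01010101, 0b10110000]
for p in peers:
    print(f"{p:08b}: distance {xor_distance(me, p):3d}, "
          f"bucket {bucket_index(me, p)}")
```

Because the metric is unidirectional (for any id and distance there is exactly one id at that distance), these subtrees partition the id space, which is what the coverage and imbalance measurements in the abstract quantify under churn.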
Flat and hierarchical epidemics in P2P systems: Energy cost models and analysis In large scale distributed systems, epidemic or gossip-based communication mechanisms are preferred for their ease of deployment, simplicity, robustness against failures, load-balancing and limited resource usage. Although they have extensive applicability, there is no prior work on developing energy cost models for epidemic distributed mechanisms. In this study, we address power awareness features of two main groups of epidemics, namely flat and hierarchical. We propose a dominating-set based and power-aware hierarchical epidemic approach that eliminates a significant number of peers from gossiping. To the best of our knowledge, using a dominating set to build a hierarchy for epidemic communication and provide energy efficiency in P2P systems is a novel approach. We develop energy cost model formulations for flat and hierarchical epidemics. In contrast to the prior works, our study is the first one that proposes energy cost models for generic peers using epidemic communication, and examines the effect of protocol parameters to characterize energy consumption. As a case study protocol, we use our epidemic protocol ProFID for frequent items discovery in P2P systems. By means of extensive large scale simulations on PeerSim, we analyze the effect of protocol parameters on energy consumption, compare flat and hierarchical epidemic approaches for efficiency, scalability, and applicability as well as investigate their resilience under realistic churn.
Locality Aware Skip Graph Skip Graph, as a distributed hash table (DHT) based data structure, plays a key role in peer-to-peer (P2P) storage systems, distributed online social networks, search engines, and several DHT-based applications. In the Skip Graph structure, node identifiers define the connectivity. However, traditional identifier assignment algorithms do not consider the Skip Graph nodes' locations. Neglecting the nodes' localities in the identifier assignments results in high end-to-end latency in the overlay network which negatively affects the overall system performance. In this paper, we propose a method to assign the Skip Graph node identifiers considering their location information and make the nodes locality aware. In the proposed dynamic and fully decentralized algorithm, named DPAD, instead of assigning node identifiers uniformly at random, locality aware identifiers will be assigned to the nodes at their arrival to the system based on their distances to some super-nodes named landmarks. We define locality awareness as the similarity of the distances between the nodes in the overlay and underlay networks. Performance analysis results show that DPAD algorithm provides about 82% improvement in the locality awareness of node identifiers and about 40% improvement in the search query end-to-end latency, compared to the best known static and dynamic algorithms.
Tiara: A Self-stabilizing Deterministic Skip List We present Tiara -- a self-stabilizing peer-to-peer network maintenance algorithm. Tiara is truly deterministic which allows it to achieve exact performance bounds. Tiara allows logarithmic searches and topology updates. It is based on a novel sparse 0-1 skip list . We rigorously prove the algorithm correct in the shared register model. We then describe its extension to a ring and incorporation of crash tolerance.
Modeling Churn in P2P Networks The objective of this paper is to introduce a model to guide the analysis of the impact of churn in P2P networks. Using this model, a variety of node membership scenarios is created. These scenarios are used to capture and analyze the performance trends of Chord, a distributed hash table (DHT) based resource lookup protocol for Peer-to-peer overlay networks. The performance study focuses both on the performance of routing and content retrieval. This study also identifies the limitations of various churn-alleviating mechanisms, frequently proposed in the literature. The study highlights the importance of the content nature and access pattern on the performance of P2P, DHT-based overlay networks. The results show that the type of content being accessed and the way the content is accessed has a significant impact on the performance of P2P networks.
Why Do Internet Services Fail, and What Can Be Done About It? We describe the architecture, operational practices, and failure characteristics of three very large-scale Internet services. Our research on architecture and operational practices took the form of interviews with architects and operations staff at those (and several other) services. Our research on component and service failure took the form of examining the operations problem tracking databases from two of the services and a log of service failure post-mortem reports from the third. Architecturally, we find convergence on a common structure: division of nodes into service front-ends and back-ends, multiple levels of redundancy and load-balancing, and use of custom-written software for both production services and administrative tools. Operationally, we find a thin line between service developers and operators, and a need to coordinate problem detection and repair across administrative domains. With respect to failures, we find that operator errors are their primary cause, operator error is the most difficult type of failure to mask, service front-ends are responsible for more problems than service back-ends but fewer minutes of unavailability, and that online testing and more thoroughly exposing and detecting component failures could reduce system failure rates for at least one service.
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ⋄W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ⋄W. Thus, ⋄W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Design Techniques for Fully Integrated Switched-Capacitor DC-DC Converters. This paper describes design techniques to maximize the efficiency and power density of fully integrated switched-capacitor (SC) DC-DC converters. Circuit design methods are proposed to enable simplified gate drivers while supporting multiple topologies (and hence output voltages). These methods are verified by a proof-of-concept converter prototype implemented in 0.374 mm2 of a 32 nm SOI process. ...
Mapping irregular applications to DIVA, a PIM-based data-intensive architecture
Constrained Consensus and Optimization in Multi-Agent Networks We present distributed algorithms that can be used by multiple agents to align their estimates with a particular value over a network with time-varying connectivity. Our framework is general in that this value can represent a consensus value among multiple agents or an optimal solution of an optimization problem, where the global objective function is a combination of local agent objective functions. Our main focus is on constrained problems where the estimate of each agent is restricted to lie in a different constraint set. To highlight the effects of constraints, we first consider a constrained consensus problem and present a distributed "projected consensus algorithm" in which agents combine their local averaging operation with projection on their individual constraint sets. This algorithm can be viewed as a version of an alternating projection method with weights that are varying over time and across agents. We establish convergence and convergence rate results for the projected consensus algorithm. We next study a constrained optimization problem for optimizing the sum of local objective functions of the agents subject to the intersection of their local constraint sets. We present a distributed "projected subgradient algorithm" which involves each agent performing a local averaging operation, taking a subgradient step to minimize its own objective function, and projecting on its constraint set. We show that, with an appropriately selected stepsize rule, the agent estimates generated by this algorithm converge to the same optimal solution for the cases when the weights are constant and equal, and when the weights are time-varying but all agents have the same constraint set.
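One round of the projected consensus algorithm described here is easy to write down. The sketch below uses scalar estimates, a fixed doubly stochastic weight matrix, and interval constraint sets; all three are illustrative simplifications of mine (the paper allows time-varying weights and general convex sets).

```python
import numpy as np

def projected_consensus_step(x, W, project):
    """One round of the projected consensus iteration: each agent i
    averages the estimates with weights W[i, :] and then projects the
    result onto its own constraint set X_i."""
    mixed = W @ x                        # local weighted averaging
    return np.array([project[i](mixed[i]) for i in range(len(x))])

# 3 agents on a path graph with Metropolis (doubly stochastic) weights;
# the interval constraint sets X_i intersect in [2, 3]
W = np.array([[2/3, 1/3, 0.0],
              [1/3, 1/3, 1/3],
              [0.0, 1/3, 2/3]])
boxes = [(0.0, 3.0), (2.0, 5.0), (1.0, 4.0)]
project = [lambda v, lo=lo, hi=hi: np.clip(v, lo, hi) for lo, hi in boxes]

x = np.array([0.0, 5.0, 1.0])            # one scalar estimate per agent
for _ in range(100):
    x = projected_consensus_step(x, W, project)
print(np.round(x, 3))  # all agents approach a common point inside [2, 3]
```

The projected subgradient variant in the abstract simply inserts a subgradient step on each agent's local objective between the averaging and the projection.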
Architectural Evolution of Integrated M-Phase High-Q Bandpass Filters M-phase bandpass filters (BPFs) are analyzed, and variations of the structure are proposed. For values of M that are integer multiples of 4, the conventional M-phase BPF structure is modified to take complex baseband impedances and frequency-translate their complex impedance response to the local oscillator frequency. Also, it is demonstrated how the M-phase BPF can be modified to implement a high-quality-factor (Q) image-rejection BPF with quadrature RF inputs. In addition, we present high-Q BPFs whose center frequencies are equal to the sum or difference of the RF and IF (intermediate frequency) clocks. Such filters can be useful in heterodyne receiver architectures.
20.3 A feedforward controlled on-chip switched-capacitor voltage regulator delivering 10W in 32nm SOI CMOS On-chip (or fully integrated) switched-capacitor (SC) voltage regulators (SCVR) have recently received a lot of attention due to their ease of monolithic integration. The use of deep trench capacitors can lead to SCVR implementations that simultaneously achieve high efficiency, high power density, and fast response time. For the application of granular power distribution of many-core microprocessor systems, the on-chip SCVR must maintain an output voltage above a certain minimum level V_OUT,min in order for the microprocessor core to meet setup time requirements. Following a transient load change, the output voltage typically exhibits a droop due to parasitic inductances and resistances in the power distribution network. Therefore, the steady-state output voltage is kept high enough to ensure V_OUT > V_OUT,min at all times, thereby introducing an output voltage overhead that leads to increased system power consumption. The output voltage droop can be reduced by implementing fast regulation and a sufficient amount of on-chip decoupling capacitance. However, a large amount of on-chip decoupling capacitance is needed to significantly reduce the droop, and it becomes impractical to implement owing to the large chip area overhead required. This paper presents a feedforward control scheme that significantly reduces the output voltage droop in the presence of a large input voltage droop following a transient event. This in turn reduces the required output voltage overhead and may lead to significant overall system power savings.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM), and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM), and 10.48× (CMOS-based PIM), on average.
score_0–score_13: 1.053949, 0.061143, 0.061143, 0.03685, 0.030571, 0.010846, 0.007596, 0.000816, 0, 0, 0, 0, 0, 0
Data-Driven Pricing Strategy for Demand-Side Resource Aggregators We consider a utility who seeks to coordinate the energy consumption of multiple demand-side flexible resource aggregators. For the purpose of privacy protection, the utility has no access to the detailed information of loads of resource aggregators. Instead, we assume that the utility can directly observe each aggregator’s aggregate energy consumption outcomes. Furthermore, the utility can influence resource aggregators’ energy consumption via time-varying electricity price profiles. Based on an inverse optimization technique, we propose an estimation method for the utility to infer the energy requirement information of aggregators. Subsequently, we design a data-driven pricing scheme to help the utility achieve system-level control objectives (e.g., minimizing peak demand) by combining a hybrid particle swarm optimizer with mutation (HPSOM) and an iterative algorithm. Case studies demonstrate the effectiveness of the proposed approach against two benchmark pricing strategies – a flat-rate scheme and a time-of-use (TOU) scheme.
Estimation of entropy and mutual information We present some new results on the nonparametric estimation of entropy and mutual information. First, we use an exact local expansion of the entropy function to prove almost sure consistency and central limit theorems for three of the most commonly used discretized information estimators. The setup is related to Grenander's method of sieves and places no assumptions on the underlying probability measure generating the data. Second, we prove a converse to these consistency theorems, demonstrating that a misapplication of the most common estimation techniques leads to an arbitrarily poor estimate of the true information, even given unlimited data. This "inconsistency" theorem leads to an analytical approximation of the bias, valid in surprisingly small sample regimes and more accurate than the usual 1/N formula of Miller and Madow over a large region of parameter space. The two most practical implications of these results are negative: (1) information estimates in a certain data regime are likely contaminated by bias, even if "bias-corrected" estimators are used, and (2) confidence intervals calculated by standard techniques drastically underestimate the error of the most common estimation methods. Finally, we note a very useful connection between the bias of entropy estimators and a certain polynomial approximation problem. By casting bias calculation problems in this approximation theory framework, we obtain the best possible generalization of known asymptotic bias results. More interestingly, this framework leads to an estimator with some nice properties: the estimator comes equipped with rigorous bounds on the maximum error over all possible underlying probability distributions, and this maximum error turns out to be surprisingly small. We demonstrate the application of this new estimator on both real and simulated data.
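To make the bias issue concrete, the following sketch (ours, not the paper's new estimator) compares the naive plug-in entropy estimate with the Miller-Madow 1/N correction mentioned above on a small sample; the distribution and sample size are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
p = np.array([0.5, 0.25, 0.125, 0.125])            # hypothetical true distribution
true_H = -np.sum(p * np.log(p))                    # true entropy in nats

counts = np.bincount(rng.choice(4, size=50, p=p), minlength=4)
N = counts.sum()
phat = counts / N
nz = phat > 0
H_plugin = -np.sum(phat[nz] * np.log(phat[nz]))    # plug-in estimate, biased low
H_mm = H_plugin + (np.count_nonzero(counts) - 1) / (2 * N)  # Miller-Madow correction

print(true_H, H_plugin, H_mm)                      # H_mm is typically closer to true_H
```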
Subspace pursuit for compressive sensing signal reconstruction We propose a new method for reconstruction of sparse signals with and without noisy perturbations, termed the subspace pursuit algorithm. The algorithm has two important characteristics: low computational complexity, comparable to that of orthogonal matching pursuit techniques when applied to very sparse signals, and reconstruction accuracy of the same order as that of linear programming (LP) optimization methods. The presented analysis shows that in the noiseless setting, the proposed algorithm can exactly reconstruct arbitrary sparse signals provided that the sensing matrix satisfies the restricted isometry property with a constant parameter. In the noisy setting and in the case that the signal is not exactly sparse, it can be shown that the mean-squared error of the reconstruction is upper-bounded by constant multiples of the measurement and signal perturbation energies.
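Below is a compact sketch of the subspace pursuit iteration as the abstract describes it (expand the candidate support by the K strongest residual correlations, solve a least-squares fit, prune back to K indices); the variable names, stopping rule, and test problem are ours, not the paper's.

```python
import numpy as np

def subspace_pursuit(A, y, K, iters=10):
    """Sketch of subspace pursuit; assumes real A with unit-norm columns, known K."""
    n = A.shape[1]
    S = np.argsort(-np.abs(A.T @ y))[:K]                     # initial support
    for _ in range(iters):
        r = y - A[:, S] @ np.linalg.lstsq(A[:, S], y, rcond=None)[0]
        T = np.union1d(S, np.argsort(-np.abs(A.T @ r))[:K])  # expanded support (<= 2K)
        b = np.linalg.lstsq(A[:, T], y, rcond=None)[0]
        S = T[np.argsort(-np.abs(b))[:K]]                    # prune back to K
    x = np.zeros(n)
    x[S] = np.linalg.lstsq(A[:, S], y, rcond=None)[0]
    return x

# Tiny demo: recover a 3-sparse signal from 20 random measurements.
rng = np.random.default_rng(1)
A = rng.standard_normal((20, 50)); A /= np.linalg.norm(A, axis=0)
x0 = np.zeros(50); x0[[3, 17, 41]] = [1.0, -2.0, 0.5]
print(np.round(subspace_pursuit(A, A @ x0, 3)[[3, 17, 41]], 3))  # ≈ [1., -2., 0.5]
```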
Household Electricity Demand Forecast Based on Context Information and User Daily Schedule Analysis From Meter Data The very short-term load forecasting (VSTLF) problem is of particular interest for use in smart grid and automated demand response applications. An effective solution for VSTLF can facilitate real-time electricity deployment and improve its quality. In this paper, a novel approach to model the very short-term load of individual households based on context information and daily schedule pattern analysis is proposed. Several daily behavior pattern types were obtained by analyzing the time series of daily electricity consumption, and context features from various sources were collected and used to establish a rule set for use in anticipating the likely behavior pattern type of a specific day. Meanwhile, an electricity consumption volume prediction model was developed for each behavior pattern type to predict the load at a specific time point in a day. This study was concerned with solving the VSTLF for individual households in Taiwan. The proposed approach obtained an average mean absolute percentage error (MAPE) of 3.23% and 2.44% for forecasting individual household load and aggregation load 30-min ahead, respectively, which is more favorable than other methods.
Grid Influenced Peer-to-Peer Energy Trading This paper proposes a peer-to-peer (P2P) energy trading scheme that can help a centralized power system to reduce the total electricity demand of its customers at the peak hour. To do so, a cooperative Stackelberg game is formulated, in which the centralized power system acts as the leader that needs to decide on a price at the peak demand period to incentivize prosumers to not seek any energy from it. The prosumers, on the other hand, act as followers and respond to the leader’s decision by forming suitable coalitions with neighboring prosumers in order to participate in P2P energy trading to meet their energy demand. The properties of the proposed Stackelberg game are studied. It is shown that the game has a unique and stable Stackelberg equilibrium, as a result of the stability of prosumers’ coalitions. At the equilibrium, the leader chooses its strategy using a derived closed-form expression, while the prosumers choose their equilibrium coalition structure. An algorithm is proposed that enables the centralized power system and the prosumers to reach the equilibrium solution. Numerical case studies demonstrate the beneficial properties of the proposed scheme.
Multi-Agent Based Transactive Energy Management Systems for Residential Buildings with Distributed Energy Resources Proper management of building loads and distributed energy resources (DER) can offer grid assistance services in transactive energy (TE) frameworks besides providing cost savings for the consumer. However, most TE models require building loads and DER units to be managed by external entities (e.g., aggregators), and in some cases, consumers need to provide critical information related to their ele...
Tapestry: a resilient global-scale overlay for service deployment We present Tapestry, a peer-to-peer overlay routing infrastructure offering efficient, scalable, location-independent routing of messages directly to nearby copies of an object or service using only localized resources. Tapestry supports a generic decentralized object location and routing applications programming interface using a self-repairing, soft-state-based routing layer. The paper presents the Tapestry architecture, algorithms, and implementation. It explores the behavior of a Tapestry deployment on PlanetLab, a global testbed of approximately 100 machines. Experimental results show that Tapestry exhibits stable behavior and performance as an overlay, despite the instability of the underlying network layers. Several widely distributed applications have been implemented on Tapestry, illustrating its utility as a deployment infrastructure.
A Low-Power Fast-Transient 90-nm Low-Dropout Regulator With Multiple Small-Gain Stages A power-efficient 90-nm low-dropout regulator (LDO) with multiple small-gain stages is proposed in this paper. The proposed channel-resistance-insensitive small-gain stages provide loop gain enhancements without introducing low-frequency poles before the unity-gain frequency (UGF). As a result, both the loop gain and bandwidth of the LDO are improved, so that the accuracy and response speed of voltage regulation are significantly enhanced. As no on-chip compensation capacitor is required, the active chip area of the LDO is only 72.5 μm × 37.8 μm. Experimental results show that the LDO is capable of providing an output of 0.9 V with maximum output current of 50 mA from a 1-V supply. The LDO has a quiescent current of 9.3 μA, and shows significant improvement in line and load transient responses as well as in power-supply rejection ratio (PSRR).
Energy-Efficient Communication Protocol for Wireless Microsensor Networks Wireless distributed micro-sensor systems will enable the reliable monitoring of a variety of environments for both civil and military applications. In this paper, we look at communication protocols, which can have significant impact on the overall energy dissipation of these networks. Based on our findings that the conventional protocols of direct transmission, minimum-transmission-energy, multihop routing, and static clustering may not be optimal for sensor networks, we propose LEACH (Low-Energy Adaptive Clustering Hierarchy), a clustering-based protocol that utilizes randomized rotation of local cluster base stations (cluster-heads) to evenly distribute the energy load among the sensors in the network. LEACH uses localized coordination to enable scalability and robustness for dynamic networks, and incorporates data fusion into the routing protocol to reduce the amount of information that must be transmitted to the base station. Simulations show that LEACH can achieve as much as a factor of 8 reduction in energy dissipation compared with conventional routing protocols. In addition, LEACH is able to distribute energy dissipation evenly throughout the sensors, doubling the useful system lifetime for the networks we simulated.
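A minimal sketch of LEACH's randomized cluster-head rotation follows; the threshold formula is the one commonly quoted for LEACH, while the network size, cluster-head fraction P, and round count are hypothetical.

```python
import random

def leach_threshold(P, r):
    # T(n) = P / (1 - P * (r mod 1/P)): rises toward 1 as the epoch progresses,
    # so every remaining eligible node eventually serves as cluster-head.
    return P / (1 - P * (r % int(round(1 / P))))

P, n_nodes = 0.1, 100                    # desired cluster-head fraction, network size
eligible = set(range(n_nodes))           # nodes not yet cluster-head this epoch
for r in range(20):
    heads = {n for n in eligible if random.random() < leach_threshold(P, r)}
    eligible -= heads                    # heads sit out until the epoch resets
    if not eligible:
        eligible = set(range(n_nodes))   # new epoch: everyone eligible again
    # heads would now collect, fuse, and forward their cluster's data
```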
TaintDroid: An Information-Flow Tracking System for Realtime Privacy Monitoring on Smartphones Today’s smartphone operating systems frequently fail to provide users with visibility into how third-party applications collect and share their private data. We address these shortcomings with TaintDroid, an efficient, system-wide dynamic taint tracking and analysis system capable of simultaneously tracking multiple sources of sensitive data. TaintDroid enables realtime analysis by leveraging Android’s virtualized execution environment. TaintDroid incurs only 32% performance overhead on a CPU-bound microbenchmark and imposes negligible overhead on interactive third-party applications. Using TaintDroid to monitor the behavior of 30 popular third-party Android applications, in our 2010 study we found 20 applications potentially misused users’ private information; so did a similar fraction of the tested applications in our 2012 study. Monitoring the flow of privacy-sensitive data with TaintDroid provides valuable input for smartphone users and security service firms seeking to identify misbehaving applications.
An almost necessary and sufficient condition for robust stability of closed-loop systems with disturbance observer The disturbance observer (DOB)-based controller has been widely employed in industrial applications due to its powerful ability to reject disturbances and compensate plant uncertainties. In spite of various successful applications, no necessary and sufficient condition for robust stability of the closed-loop systems with the DOB has been reported in the literature. In this paper, we present an almost necessary and sufficient condition for robust stability when the Q-filter has a sufficiently small time constant. The proposed condition indicates that robust stabilization can be achieved against arbitrarily large (but bounded) uncertain parameters, provided that an outer-loop controller stabilizes the nominal system and the uncertain plant is of minimum phase.
Cross-layer sensors for green cognitive radio. Green cognitive radio is a cognitive radio (CR) that is aware of sustainable development issues and deals with an additional constraint as regards the decision-making function of the cognitive cycle. In this paper, it is explained how the sensors distributed throughout the different layers of our CR model could help in making the best decision in order to best contribute to sustainable development.
27.9 A 200kS/s 13.5b integrated-fluxgate differential-magnetic-to-digital converter with an oversampling compensation loop for contactless current sensing High voltage applications such as electric motor controllers, solar panel power inverters, electric vehicle battery chargers, uninterrupted and switching mode power supplies benefit from the galvanic isolation of contactless current sensors (CCS) [1]. These include magnetic sensors that sense the magnetic field emanating from a current-carrying conductor. The offset and resolution of Hall-effect sensors is at the μT level [1-3], in contrast to the nT-level accuracy of integrated-fluxgate (IFG) magnetometers [4]. Previously reported sampled-data closed-loop IFG readouts have limited BWs as their sampling frequencies (fs) are limited to be less than or equal to the IFG excitation frequency, fEXC [5-7]. This paper describes a differential closed-loop IFG CCS with fs > fEXC. The differential architecture rejects magnetic stray fields and achieves 750× larger BW than the prior closed-loop IFG readouts [6-7] with 10× better offset than the Hall-effect sensors [1-3].
An Energy-Efficient SAR ADC With Event-Triggered Error Correction This brief presents an energy-efficient fully differential 10-bit successive approximation register (SAR) analog-to-digital converter (ADC) with a sampling rate of 320 kS/s. The optimal capacitor split and bypass number is analyzed to achieve the highest switching energy efficiency. The common-mode voltage level remains constant during the MSB-capacitor switching cycles. To minimize nonlinearity due to charge averaging voltage offset or DAC array mismatch, an event-triggered error correction method is employed as a redundant cycle for detecting digital code errors within 1 least significant bit (LSB). A test chip was fabricated using the 180-nm CMOS process and occupied a 0.0564-mm² core area. Under a regular 0.65-V supply voltage, the ADC achieved an effective number of bits of 9.61 bits and a figure of merit (FOM) of 6.38 fJ/conversion-step, with 1.6-μW power dissipation for a low-frequency input. The measured differential and integral nonlinearity results are within 0.30 LSB and 0.43 LSB, respectively.
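As a quick arithmetic cross-check of the figures quoted above, the standard Walden-style figure of merit FOM = P / (2^ENOB · fs) reproduces the stated value:

```python
# FOM = P / (2**ENOB * fs); the numbers are taken from the abstract above.
P, ENOB, fs = 1.6e-6, 9.61, 320e3                       # watts, bits, samples/s
print(P / (2**ENOB * fs) * 1e15, "fJ/conversion-step")  # ≈ 6.4, matching the quoted 6.38
```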
score_0–score_13: 1.2, 0.2, 0.2, 0.2, 0.2, 0.066667, 0, 0, 0, 0, 0, 0, 0, 0
An Efficient Hardware Accelerator for Structured Sparse Convolutional Neural Networks on FPGAs Deep convolutional neural networks (CNNs) have achieved state-of-the-art performance in a wide range of applications. However, deeper CNN models, which are usually computation consuming, are widely required for complex artificial intelligence (AI) tasks. Though recent research progress on network compression, such as pruning, has emerged as a promising direction to mitigate computational burden, existing accelerators are still prevented from completely utilizing the benefits of leveraging sparsity due to the irregularity caused by pruning. On the other hand, field-programmable gate arrays (FPGAs) have been regarded as a promising hardware platform for CNN inference acceleration. However, most existing FPGA accelerators focus on dense CNN and cannot address the irregularity problem. In this article, we propose a sparsewise dataflow to skip the cycles of processing multiply-and-accumulates (MACs) with zero weights and exploit data statistics to minimize energy through zeros gating to avoid unnecessary computations. The proposed sparsewise dataflow leads to a low bandwidth requirement and high data sharing. Then, we design an FPGA accelerator containing a vector generator module (VGM) that can match the index between sparse weights and input activations according to the proposed dataflow. Experimental results demonstrate that our implementation can achieve 987-, 46-, and 57-image/s performance for AlexNet, VGG-16, and ResNet-50 on Xilinx ZCU102, respectively, which provides 1.5×-6.7× speedup and 2.0×-6.0× energy efficiency over previous CNN FPGA accelerators.
An Energy-Efficient FPGA-Based Deconvolutional Neural Networks Accelerator for Single Image Super-Resolution Convolutional neural networks (CNNs) demonstrate excellent performance in various computer vision applications. In recent years, FPGA-based CNN accelerators have been proposed for optimizing performance and power efficiency. Most accelerators are designed for object detection and recognition algorithms that are performed on low-resolution (LR) images. However, real-time image super-resolution (SR) cannot be implemented on a typical accelerator because of the long execution cycles required to generate high-resolution (HR) images, such as those used in ultra-high-definition (UHD) systems. In this paper, we propose a novel CNN accelerator with efficient parallelization methods for SR applications. First, we propose a new methodology for optimizing the deconvolutional neural networks (DCNNs) used for increasing feature maps. Secondly, we propose a novel method to optimize CNN dataflow so that the SR algorithm can be driven at low power in display applications. Finally, we quantize and compress a DCNN-based SR algorithm into an optimal model for efficient inference using on-chip memory. We present an energy-efficient architecture for SR and validate our architecture on a mobile panel with quad-high-definition (QHD) resolution. Our experimental results show that, with the same hardware resources, the proposed DCNN accelerator achieves a throughput up to 108 times greater than that of a conventional DCNN accelerator. In addition, our SR system achieves an energy efficiency of 144.9 GOPS/W, 293.0 GOPS/W, and 500.2 GOPS/W at SR scale factors of 2, 3, and 4, respectively. Furthermore, we demonstrate that our system can restore HR images to a high quality while greatly reducing the data bit-width and the number of parameters compared to conventional SR algorithms.
A High-Throughput and Power-Efficient FPGA Implementation of YOLO CNN for Object Detection Convolutional neural networks (CNNs) require numerous computations and external memory accesses. Frequent accesses to off-chip memory cause slow processing and large power dissipation. For real-time object detection with high throughput and power efficiency, this paper presents a Tera-OPS streaming hardware accelerator implementing a you-only-look-once (YOLO) CNN. The parameters of the YOLO CNN are retrained and quantized with the PASCAL VOC data set using binary weight and flexible low-bit activation. The binary weight enables storing the entire network model in block RAMs of a field-programmable gate array (FPGA) to reduce off-chip accesses aggressively and, thereby, achieve significant performance enhancement. In the proposed design, all convolutional layers are fully pipelined for enhanced hardware utilization. The input image is delivered to the accelerator line-by-line. Similarly, the output from the previous layer is transmitted to the next layer line-by-line. The intermediate data are fully reused across layers, thereby eliminating external memory accesses. The decreased dynamic random access memory (DRAM) accesses reduce DRAM power consumption. Furthermore, as the convolutional layers are fully parameterized, it is easy to scale up the network. In this streaming design, each convolution layer is mapped to a dedicated hardware block. Therefore, it outperforms the “one-size-fits-all” designs in both performance and power efficiency. This CNN implemented using VC707 FPGA achieves a throughput of 1.877 tera operations per second (TOPS) at 200 MHz with batch processing while consuming 18.29 W of on-chip power, which shows the best power efficiency compared with the previous research. As for object detection accuracy, it achieves a mean average precision (mAP) of 64.16% for the PASCAL VOC 2007 data set that is only 2.63% lower than the mAP of the same YOLO network with full precision.
Accelerating Transformer-based Deep Learning Models on FPGAs using Column Balanced Block Pruning Although Transformer-based language representations achieve state-of-the-art accuracy on various natural language processing (NLP) tasks, the large model size has been challenging the resource constrained computing platforms. Weight pruning, as a popular and effective technique in reducing the number of weight parameters and accelerating the Transformer, has been investigated on GPUs. However, the...
EdgeBERT: Sentence-Level Energy Optimizations for Latency-Aware Multi-Task NLP Inference Transformer-based language models such as BERT provide significant accuracy improvement to a multitude of natural language processing (NLP) tasks. However, their hefty computational and memory demands make them challenging to deploy to resource-constrained edge platforms with strict latency requirements. We present EdgeBERT, an in-depth algorithm-hardware co-design for latency-aware energy optimizations for multi-task NLP. EdgeBERT employs entropy-based early exit predication in order to perform dynamic voltage-frequency scaling (DVFS), at a sentence granularity, for minimal energy consumption while adhering to a prescribed target latency. Computation and memory footprint overheads are further alleviated by employing a calibrated combination of adaptive attention span, selective network pruning, and floating-point quantization. Furthermore, in order to maximize the synergistic benefits of these algorithms in always-on and intermediate edge computing settings, we specialize a 12nm scalable hardware accelerator system, integrating a fast-switching low-dropout voltage regulator (LDO), an all-digital phase-locked loop (ADPLL), as well as high-density embedded non-volatile memories (eNVMs) wherein the sparse floating-point bit encodings of the shared multi-task parameters are carefully stored. Altogether, latency-aware multi-task NLP inference acceleration on the EdgeBERT hardware system generates up to 7×, 2.5×, and 53× lower energy compared to the conventional inference without early stopping, the latency-unbounded early exit approach, and CUDA adaptations on an Nvidia Jetson Tegra X2 mobile GPU, respectively.
ImageNet Large Scale Visual Recognition Challenge. The ImageNet Large Scale Visual Recognition Challenge is a benchmark in object category classification and detection on hundreds of object categories and millions of images. The challenge has been run annually from 2010 to present, attracting participation from more than fifty institutions. This paper describes the creation of this benchmark dataset and the advances in object recognition that have been possible as a result. We discuss the challenges of collecting large-scale ground truth annotation, highlight key breakthroughs in categorical object recognition, provide a detailed analysis of the current state of the field of large-scale image classification and object detection, and compare the state-of-the-art computer vision accuracy with human accuracy. We conclude with lessons learned in the 5 years of the challenge, and propose future directions and improvements.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
Chains of recurrences—a method to expedite the evaluation of closed-form functions Chains of Recurrences (CR's) are introduced as an effective method to evaluate functions at regular intervals. Algebraic properties of CR's are examined and an algorithm that constructs a CR for a given function is explained. Finally, an implementation of the method in MAXIMA/Common Lisp is discussed.
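A minimal Python rendition of the chain-of-recurrences idea follows (the paper's implementation is in MAXIMA/Common Lisp; this polynomial-only sketch and its interface are ours): after a one-time setup of forward differences, each new sample costs only a handful of additions.

```python
def cr_evaluate(coeffs, x0, h, n):
    """Evaluate the polynomial with coefficients `coeffs` (lowest degree first)
    at x0, x0+h, ..., x0+(n-1)h using a chain of recurrences."""
    deg = len(coeffs) - 1
    poly = lambda x: sum(c * x**k for k, c in enumerate(coeffs))
    row = [poly(x0 + i * h) for i in range(deg + 1)]
    cr = []
    while row:                            # forward-difference table -> chain
        cr.append(row[0])
        row = [row[i + 1] - row[i] for i in range(len(row) - 1)]
    out = []
    for _ in range(n):
        out.append(cr[0])
        for j in range(deg):              # advancing the chain: deg additions only
            cr[j] += cr[j + 1]
    return out

print(cr_evaluate([1, 0, 2], x0=0.0, h=0.5, n=5))  # 2x^2+1: [1.0, 1.5, 3.0, 5.5, 9.0]
```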
Consensus problems in networks of agents with switching topology and time-delays. In this paper, we discuss consensus problems for a network of dynamic agents with fixed and switching topologies. We analyze three cases: i) networks with switching topology and no time-delays, ii) networks with fixed topology and communication time-delays, and iii) max-consensus problems (or leader determination) for groups of discrete-time agents. In each case, we introduce a linear/nonlinear consensus protocol and provide convergence analysis for the proposed distributed algorithm. Moreover, we establish a connection between the Fiedler eigenvalue of the information flow in a network (i.e. algebraic connectivity of the network) and the negotiation speed (or performance) of the corresponding agreement protocol. It turns out that balanced digraphs play an important role in addressing average-consensus problems. We introduce disagreement functions that play the role of Lyapunov functions in convergence analysis of consensus protocols. A distinctive feature of this work is to address consensus problems for networks with directed information flow. We provide analytical tools that rely on algebraic graph theory, matrix theory, and control theory. Simulations are provided that demonstrate the effectiveness of our theoretical results.
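A discrete-time sketch of the linear consensus protocol analyzed above, on a fixed undirected ring (the graph, step size, and initial states are hypothetical); as the abstract notes, the convergence speed is governed by the Fiedler eigenvalue of the graph.

```python
import numpy as np

A = np.roll(np.eye(5), 1, axis=1) + np.roll(np.eye(5), -1, axis=1)  # 5-node ring
L = np.diag(A.sum(axis=1)) - A                                      # graph Laplacian
x = np.array([3.0, -1.0, 4.0, 0.0, 2.0])
for _ in range(300):
    x = x - 0.1 * (L @ x)      # x_i += eps * sum_j a_ij (x_j - x_i), eps = 0.1
print(x)                        # all states approach the initial average, 1.6
```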
Gossip-based aggregation in large dynamic networks As computer networks increase in size, become more heterogeneous and span greater geographic distances, applications must be designed to cope with the very large scale, poor reliability, and often, with the extreme dynamism of the underlying network. Aggregation is a key functional building block for such applications: it refers to a set of functions that provide components of a distributed system access to global information including network size, average load, average uptime, location and description of hotspots, and so on. Local access to global information is often very useful, if not indispensable for building applications that are robust and adaptive. For example, in an industrial control application, some aggregate value reaching a threshold may trigger the execution of certain actions; a distributed storage system will want to know the total available free space; load-balancing protocols may benefit from knowing the target average load so as to minimize the load they transfer. We propose a gossip-based protocol for computing aggregate values over network components in a fully decentralized fashion. The class of aggregate functions we can compute is very broad and includes many useful special cases such as counting, averages, sums, products, and extremal values. The protocol is suitable for extremely large and highly dynamic systems due to its proactive structure---all nodes receive the aggregate value continuously, thus being able to track any changes in the system. The protocol is also extremely lightweight, making it suitable for many distributed applications including peer-to-peer and grid computing systems. We demonstrate the efficiency and robustness of our gossip-based protocol both theoretically and experimentally under a variety of scenarios including node and communication failures.
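A toy version of gossip-based averaging in the spirit described above (pair selection, values, and iteration count are ours): random pairs repeatedly replace their values with the midpoint, so every node's local value converges to the global average with no central coordinator.

```python
import random

random.seed(0)
values = [10.0, 0.0, 4.0, 6.0, 0.0]              # e.g., local load at each node
for _ in range(500):
    i, j = random.sample(range(len(values)), 2)  # a random gossiping pair
    values[i] = values[j] = (values[i] + values[j]) / 2
print(values)                                    # all ≈ 4.0, the true average
```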
Linear Amplification with Nonlinear Components A technique for producing bandpass linear amplification with nonlinear components (LINC) is described. The bandpass signal first is separated into two constant envelope component signals. All of the amplitude and phase information of the original bandpass signal is contained in phase modulation on the component signals. These constant envelope signals can be amplified or translated in frequency by amplifiers or mixers which have nonlinear input-output amplitude transfer characteristics. Passive linear combining of the amplified and/or translated component signals produces an amplified and/or translated replica of the original signal.
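The LINC decomposition itself is short enough to verify numerically. In complex-baseband form, s = A·e^{jφ} with A ≤ Amax splits into two constant-envelope components with phase offsets ±arccos(A/Amax), and their sum is exactly s. This sketch (the test signal is arbitrary) checks the identity:

```python
import numpy as np

rng = np.random.default_rng(0)
s = rng.standard_normal(8) + 1j * rng.standard_normal(8)  # arbitrary baseband signal
Amax = np.abs(s).max()
phi, theta = np.angle(s), np.arccos(np.abs(s) / Amax)     # amplitude -> phase offset
s1 = (Amax / 2) * np.exp(1j * (phi + theta))              # constant envelope Amax/2
s2 = (Amax / 2) * np.exp(1j * (phi - theta))              # constant envelope Amax/2
print(np.allclose(s1 + s2, s))                            # True: combining restores s
```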
Opportunistic Information Dissemination in Mobile Ad-hoc Networks: The Profit of Global Synchrony The topic of this paper is the study of Information Dissemination in Mobile Ad-hoc Networks by means of deterministic protocols. We characterize the connectivity resulting from the movement, from failures and from the fact that nodes may join the computation at different times with two values, α and β, so that, within each period of α time slots, some node that has the information must be connected to some node without it for at least β time slots. The protocols studied are classified into three classes: oblivious (the transmission schedule of a node is only a function of its ID), quasi-oblivious (the transmission schedule may also depend on a global time), and adaptive. The main contribution of this work concerns negative results. Contrasting the lower and upper bounds derived, interesting complexity gaps among protocol classes are observed. More precisely, in order to guarantee any progress towards solving the problem, it is shown that β must be at least n − 1 in general, but that β ∈ Ω(n²/log n) if an oblivious protocol is used. Since quasi-oblivious protocols can guarantee progress with β ∈ O(n), this represents a significant gap, almost linear in β, between oblivious and quasi-oblivious protocols. Regarding the time to complete the dissemination, a lower bound of Ω(nα + n³/log n) is proved for oblivious protocols, which is tight up to a polylogarithmic factor because a constructive O(nα + n³ log n) upper bound exists for the same class. It is also proved that adaptive protocols require Ω(nα + n²), which is optimal given that a matching upper bound can be proved for quasi-oblivious protocols. These results show that the gap in time complexity between oblivious and quasi-oblivious, and hence adaptive, protocols is almost linear. This gap is what we call the profit of global synchrony, since it represents the gain the network obtains from global synchrony with respect to not having it.
Implementation of LTE SC-FDMA on the USRP2 software defined radio platform In this paper we discuss the implementation of a Single Carrier Frequency Division Multiple Access (SC-FDMA) transceiver running over the Universal Software Radio Peripheral 2 (USRP2). SC-FDMA is the air interface which has been selected for the uplink in the latest Long Term Evolution (LTE) standard. In this paper we derive an AWGN channel model for SC-FDMA transmission, which is useful for benchmarking experimental results. In our implementation, we deal with signal scaling, equalization and partial synchronization to realize SC-FDMA transmission over a noisy channel at rates up to 5.184 Mbit/s. Experimental results on the Bit Error Rate (BER) versus Signal-to-Noise Ratio (SNR) are presented and compared to theoretical and simulated performance.
Power Efficiency Comparison of Event-Driven and Fixed-Rate Signal Conversion and Compression for Biomedical Applications Energy-constrained biomedical recording systems need power-efficient data converters and good signal compression in order to meet the stringent power consumption requirements of many applications. In the literature today, typically a SAR ADC in combination with digital compression is used. Recently, alternative event-driven sampling techniques have been proposed that incorporate compression in the ADC, such as level-crossing A/D conversion. This paper describes the power efficiency analysis of such level-crossing ADC (LCADC) and the traditional fixed-rate SAR ADC with simple compression. A model for the power consumption of the LCADC is derived, which is then compared to the power consumption of the SAR ADC with zero-order hold (ZOH) compression for multiple biosignals (ECG, EMG, EEG, and EAP). The LCADC is more power efficient than the SAR ADC up to a cross-over point in quantizer resolution (for example 8 bits for an EEG signal). This cross-over point decreases with the ratio of the maximum to average slope in the signal of the application. It also changes with the technology and design techniques used. The LCADC is thus suited for low to medium resolution applications. In addition, the event-driven operation of an LCADC results in fewer data to be transmitted in a system application. The event-driven LCADC without timer and with single-bit quantizer achieves a reduction in power consumption at system level of two orders of magnitude, an order of magnitude better than the SAR ADC with ZOH compression. At system level, the LCADC thus offers a big advantage over the SAR ADC.
score_0–score_13: 1.1, 0.1, 0.1, 0.05, 0.033333, 0.005556, 0, 0, 0, 0, 0, 0, 0, 0
Asynchronous Broadcast-Based Convex Optimization Over a Network. We consider a distributed multi-agent network system where each agent has its own convex objective function, which can be evaluated with stochastic errors. The problem consists of minimizing the sum of the agent functions over a commonly known constraint set, but without a central coordinator and without agents sharing the explicit form of their objectives. We propose an asynchronous broadcast-based algorithm where the communications over the network are subject to random link failures. We investigate the convergence properties of the algorithm for a diminishing (random) stepsize and a constant stepsize, where each agent chooses its own stepsize independently of the other agents. Under some standard conditions on the gradient errors, we establish almost sure convergence of the method to an optimal point for diminishing stepsize. For constant stepsize, we establish some error bounds on the expected distance from the optimal point and the expected function value. We also provide numerical results.
Federated Learning: Challenges, Methods, and Future Directions Federated learning involves training statistical models over remote devices or siloed data centers, such as mobile phones or hospitals, while keeping data localized. Training in heterogeneous and potentially massive networks introduces novel challenges that require a fundamental departure from standard approaches for large-scale machine learning, distributed optimization, and privacy-preserving data analysis. In this article, we discuss the unique characteristics and challenges of federated learning, provide a broad overview of current approaches, and outline several directions of future work that are relevant to a wide range of research communities.
AIR Tools - A MATLAB package of algebraic iterative reconstruction methods We present a MATLAB package with implementations of several algebraic iterative reconstruction methods for discretizations of inverse problems. These so-called row action methods rely on semi-convergence for achieving the necessary regularization of the problem. Two classes of methods are implemented: Algebraic Reconstruction Techniques (ART) and Simultaneous Iterative Reconstruction Techniques (SIRT). In addition we provide a few simplified test problems from medical and seismic tomography. For each iterative method, a number of strategies are available for choosing the relaxation parameter and the stopping rule. The relaxation parameter can be fixed, or chosen adaptively in each iteration; in the former case we provide a new ''training'' algorithm that finds the optimal parameter for a given test problem. The stopping rules provided are the discrepancy principle, the monotone error rule, and the NCP criterion; for the first two methods ''training'' can be used to find the optimal discrepancy parameter.
A Survey on Network Methodologies for Real-Time Analytics of Massive IoT Data and Open Research Issues. With the widespread adoption of the Internet of Things (IoT), the number of connected devices is growing at an exponential rate, which is contributing to ever-increasing, massive data volumes. Real-time analytics on the massive IoT data, referred to as the “real-time IoT analytics” in this paper, is becoming the mainstream with an aim to provide an immediate or non-immediate actionable insights an...
Computation Offloading Toward Edge Computing We are living in a world where massive end devices perform computing everywhere and every day. However, these devices are constrained by the battery and computational resources. With the increasing number of intelligent applications (e.g., augmented reality and face recognition) that require much more computational power, they shift to perform computation offloading to the cloud, known as mobile cloud computing (MCC). Unfortunately, the cloud is usually far away from end devices, leading to a high latency as well as the bad quality of experience (QoE) for latency-sensitive applications. In this context, the emergence of edge computing is no coincidence. Edge computing extends the cloud to the edge of the network, close to end users, bringing ultra-low latency and high bandwidth. Consequently, there is a trend of computation offloading toward edge computing. In this paper, we provide a comprehensive perspective on this trend. First, we give an insight into the architecture refactoring in edge computing. Based on that insight, this paper reviews the state-of-the-art research on computation offloading in terms of application partitioning, task allocation, resource management, and distributed execution, with highlighting features for edge computing. Then, we illustrate some disruptive application scenarios that we envision as critical drivers for the flourish of edge computing, such as real-time video analytics, smart “things” (e.g., smart city and smart home), vehicle applications, and cloud gaming. Finally, we discuss the opportunities and future research directions.
A Proximal Gradient Algorithm for Decentralized Composite Optimization This paper proposes a decentralized algorithm for solving a consensus optimization problem defined in a static networked multi-agent system, where the local objective functions have the smooth+nonsmooth composite form. Examples of such problems include decentralized constrained quadratic programming and compressed sensing problems, as well as many regularization problems arising in inverse problems, signal processing, and machine learning, which have decentralized applications. This paper addresses the need for efficient decentralized algorithms that take advantage of proximal operations for the nonsmooth terms. We propose a proximal gradient exact first-order algorithm (PG-EXTRA) that utilizes the composite structure and has the best known convergence rate. It is a nontrivial extension to the recent algorithm EXTRA. At each iteration, each agent locally computes a gradient of the smooth part of its objective and a proximal map of the nonsmooth part, as well as exchanges information with its neighbors. The algorithm is “exact” in the sense that an exact consensus minimizer can be obtained with a fixed step size, whereas most previous methods must use diminishing step sizes. When the smooth part has Lipschitz gradients, PG-EXTRA has an ergodic convergence rate of O(1/k) in terms of the first-order optimality residual. When the smooth part vanishes, PG-EXTRA reduces to P-EXTRA, an algorithm without the gradients (so no “G” in the name), which has a slightly improved convergence rate of o(1/k) in a standard (non-ergodic) sense. Numerical experiments demonstrate the effectiveness of PG-EXTRA and validate our convergence results.
Distributed Subgradient Methods For Multi-Agent Optimization We study a distributed computation model for optimizing a sum of convex objective functions corresponding to multiple agents. For solving this (not necessarily smooth) optimization problem, we consider a subgradient method that is distributed among the agents. The method involves every agent minimizing his/her own objective function while exchanging information locally with other agents in the network over a time-varying topology. We provide convergence results and convergence rate estimates for the subgradient method. Our convergence rate results explicitly characterize the tradeoff between a desired accuracy of the generated approximate optimal solutions and the number of iterations needed to achieve the accuracy.
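A minimal instance of the distributed subgradient iteration described above (the objectives f_i(x) = |x − c_i|, the weight matrix, and the stepsize rule are hypothetical choices): each agent mixes its neighbors' estimates and then steps along a subgradient of its own objective.

```python
import numpy as np

c = np.array([1.0, 2.0, 7.0])   # per-agent targets; sum |x - c_i| is minimized at the median
x = np.zeros(3)                 # one scalar estimate per agent
W = np.full((3, 3), 1 / 3)      # doubly stochastic weights (complete graph)
for k in range(1, 3000):
    g = np.sign(x - c)          # subgradient of agent i's own |x - c_i|
    x = W @ x - (1.0 / k) * g   # mix with neighbors, then subgradient step
print(x)                        # all agents end up near the median, 2.0
```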
Synchronization of stochastic dynamical networks under impulsive control with time delays. In this paper, the stochastic synchronization problem is studied for a class of delayed dynamical networks under delayed impulsive control. Different from the existing results on the synchronization of dynamical networks under impulsive control, impulsive input delays are considered in our model. By assuming that the impulsive intervals belong to a certain interval and using the mathematical induction method, several conditions are derived to guarantee that complex networks are exponentially synchronized in mean square. The derived conditions reveal that the frequency of impulsive occurrence, impulsive input delays, and stochastic perturbations can heavily affect the synchronization performance. A control algorithm is then presented for synchronizing stochastic dynamical networks with delayed synchronizing impulses. Finally, two examples are given to demonstrate the effectiveness of the proposed approach.
Dynamic load balancing by random matchings The fundamental problems in dynamic load balancing and job scheduling in parallel and distributed networks involve moving load between processors. In this paper we consider a new model for load movement in synchronous machines. In each step of our model, load can be moved across only a matching set of communication links but across each link any amount of load can be moved. We present an efficient local algorithm for the dynamic load balancing problem under our model of load movement. Our algorithm works on networks of arbitrary topology under possible failure of links. The running time of our algorithm is related to the eigenstructure of the underlying graph. We also present experimental results analyzing issues in load balancing related to our algorithms.
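A toy version of the matching-based load-movement model above (the ring topology, loads, and greedy matching construction are our choices): in each round only a matching of links is active, but matched endpoints may exchange any amount of load, here balancing exactly.

```python
import random

random.seed(2)
load = [12.0, 0.0, 3.0, 9.0, 0.0, 6.0]          # loads on a 6-node ring
edges = [(i, (i + 1) % 6) for i in range(6)]
for _ in range(100):
    random.shuffle(edges)
    used = set()
    for u, v in edges:                           # greedily extract a random matching
        if u not in used and v not in used:
            used |= {u, v}
            load[u] = load[v] = (load[u] + load[v]) / 2
print([round(x, 2) for x in load])               # all near the average, 5.0
```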
End-to-end routing behavior in the internet The large-scale behavior of routing in the Internet has gone virtually without any formal study, the exceptions being Chinoy's (1993) analysis of the dynamics of Internet routing information, and work, similar in spirit, by Labovitz, Malan, and Jahanian (see Proc. SIGCOMM'97, 1997). We report on an analysis of 40,000 end-to-end route measurements conducted using repeated “traceroutes” between 37 Internet sites. We analyze the routing behavior for pathological conditions, routing stability, and routing symmetry. For pathologies, we characterize the prevalence of routing loops, erroneous routing, infrastructure failures, and temporary outages. We find that the likelihood of encountering a major routing pathology more than doubled between the end of 1994 and the end of 1995, rising from 1.5% to 3.3%. For routing stability, we define two separate types of stability, “prevalence”, meaning the overall likelihood that a particular route is encountered, and “persistence”, the likelihood that a route remains unchanged over a long period of time. We find that Internet paths are heavily dominated by a single prevalent route, but that the time periods over which routes persist show wide variation, ranging from seconds up to days. About two-thirds of the Internet paths had routes persisting for either days or weeks. For routing symmetry, we look at the likelihood that a path through the Internet visits at least one different city in the two directions. At the end of 1995, this was the case half the time, and at least one different autonomous system was visited 30% of the time.
Low complexity flexible filter banks for uniform and non-uniform channelisation in software radios using coefficient decimation A new approach to implement computationally efficient reconfigurable filter banks (FBs) is presented. If the coefficients of a finite impulse response filter are decimated by M, that is, if every Mth coefficient of the filter is kept unchanged and remaining coefficients are replaced by zeros, a multi-band frequency response will be obtained. The frequency response of the decimated filter will have bands with centre frequencies at 2πk/M, where k is an integer ranging from 0 to M-1. If these multi-band frequency responses are subtracted from each other or selectively masked using inherently low-complexity wide transition-band masking filters, different low-pass, high-pass, band-pass and band-stop frequency bands are obtained. The resulting FB, whose bands' centre frequencies are located at integer multiples of 2π/M, is a low complexity alternative to the well-known uniform discrete Fourier transform FBs (DFTFBs). It is shown that the channeliser based on the proposed FB does not require any DFT for its implementation unlike a DFTFB. It is also shown that the proposed FB is more flexible and easily reconfigurable than the DFTFB. Furthermore, the proposed FB is able to receive channels of multiple standards simultaneously, whereas separate FBs would be required for simultaneous reception of multi-standard channels in a DFTFB-based receiver. This is achieved through a second stage of coefficient decimation. Implementation result shows that the proposed FB offers an area reduction of 41% and improvement in the speed of 50.8% over DFTFBs.
Prediction of the Spectrum of a Digital Delta–Sigma Modulator Followed by a Polynomial Nonlinearity This paper presents a mathematical analysis of the power spectral density of the output of a nonlinear block driven by a digital delta-sigma modulator. The nonlinearity is a memoryless third-order polynomial with real coefficients. The analysis yields expressions that predict the noise floor caused by the nonlinearity when the input is constant.
Analog Filter Design Using Ring Oscillator Integrators Integrators are key building blocks in many analog signal processing circuits and systems. The DC gain of conventional opamp-RC or Gm-C integrators is severely limited by the gain of the operational transconductance amplifier (OTA) used to implement them. Process scaling reduces transistor output resistance, which further exacerbates this issue. We propose applying ring oscillator integrators (ROIs) in the design of high order analog filters. ROIs implemented with simple CMOS inverters achieve infinite DC gain at low supply voltages independent of transistor non-idealities and imperfections such as finite output impedance. Consequently, ROIs scale more effectively into newer processes. A prototype fourth order filter designed using the ROIs was fabricated in 90 nm CMOS and occupies an area of 0.29 mm2. Operating with a 0.55 V supply, the filter consumes 2.9 mW power and achieves a bandwidth of 7 MHz, SNR of 61.4 dB, SFDR of 67.6 dB and THD of 60.1 dB. The measured IM3 obtained by feeding two tones at 1 MHz and 2 MHz is 63.4 dB.
A 32-Channel Time-Multiplexed Artifact-Aware Neural Recording System This paper presents a low-power, low-noise microsystem for the recording of neural local field potentials or intracranial electroencephalographic signals. It features 32 time-multiplexed channels at the electrode interface and offers the possibility to spatially delta encode data to take advantage of the large correlation of signals captured from nearby channels. The circuit also implements a mixed-signal voltage-triggered auto-ranging algorithm which attenuates large interferers in the digital domain while preserving neural information. This effectively increases the system dynamic range and avoids the onset of saturation. A prototype, fabricated in a standard 180 nm CMOS process, has been experimentally verified in-vitro with cellular cultures of primary cortical neurons from mice. The system shows an integrated input-referred noise in the 0.5–200 Hz band of 1.4 μVrms for a spot noise of about 85 nV/√Hz. The system draws 1.5 μW per channel from a 1.2 V supply and obtains 71 dB + 26 dB dynamic range when the artifact-aware auto-ranging mechanism is enabled, without penalising other critical specifications such as crosstalk between channels or common-mode and power supply rejection ratios.
score_0–score_13: 1.038276, 0.033333, 0.033333, 0.033333, 0.033333, 0.020251, 0.007409, 0.000444, 0, 0, 0, 0, 0, 0
Distributed Continuous-Time Optimization With Scalable Adaptive Event-Based Mechanisms This paper investigates the distributed continuous-time optimization problem, which consists of a group of agents with differing local cost functions. An adaptive consensus-based algorithm with event-triggered communication is introduced, which can drive the participating agents to minimize the global cost function while excluding Zeno behavior. Compared to the existing results, the proposed event-based algorithm is independent of the parameters of the cost functions, using only the relative information of neighboring agents, and hence is fully distributed. Furthermore, the constraints of the convexity of the cost functions are relaxed.
Distributed Resource Allocation Over Directed Graphs via Continuous-Time Algorithms This paper investigates the resource allocation problem for a group of agents communicating over a strongly connected directed graph, where the total objective function of the problem is composed of the sum of the local objective functions incurred by the agents. With local convex sets, we first design a continuous-time projection algorithm over a strongly connected and weight-balanced directed graph. Our convergence analysis indicates that when the local objective functions are strongly convex, the output state of the projection algorithm could asymptotically converge to the optimal solution of the resource allocation problem. In particular, when the projection operation is not involved, we show the exponential convergence at the equilibrium point of the algorithm. Second, we propose an adaptive continuous-time gradient algorithm over a strongly connected and weight-unbalanced directed graph for the reduced case without local convex sets. In this case, we prove that the adaptive algorithm converges exponentially to the optimal solution of the considered problem, where the local objective functions and their gradients satisfy strong convexity and Lipschitz conditions, respectively. Numerical simulations illustrate the performance of our algorithms.
FROST -- Fast row-stochastic optimization with uncoordinated step-sizes. In this paper, we discuss distributed optimization over directed graphs, where doubly stochastic weights cannot be constructed. Most of the existing algorithms overcome this issue by applying push-sum consensus, which utilizes column-stochastic weights. The formulation of column-stochastic weights requires each agent to know (at least) its out-degree, which may be impractical in, for example, broadcast-based communication protocols. In contrast, we describe FROST (Fast Row-stochastic-Optimization with uncoordinated STep-sizes), an optimization algorithm applicable to directed graphs that does not require the knowledge of out-degrees, the implementation of which is straightforward as each agent locally assigns weights to the incoming information and locally chooses a suitable step-size. We show that FROST converges linearly to the optimal solution for smooth and strongly convex functions given that the largest step-size is positive and sufficiently small.
A Continuous-Time Algorithm for Distributed Optimization Based on Multiagent Networks Based on the multiagent networks, this paper introduces a continuous-time algorithm to deal with distributed convex optimization. Using nonsmooth analysis and algebraic graph theory, the distributed network algorithm is modeled by the aid of a nonautonomous differential inclusion, and each agent exchanges information from the first-order and the second-order neighbors. For any initial point, the solution of the proposed network can reach consensus to the set of minimizers if the graph has a spanning tree. In contrast to the existing continuous-time algorithms for distributed optimization, the proposed model requires the fewest state variables and relaxes the strongly connected, weight-balanced topology to a weaker case. The modified form of the proposed continuous-time algorithm is also given, and it is proven that this algorithm is suitable for solving distributed problems if the undirected network is connected. Finally, two numerical examples and an optimal placement problem confirm the effectiveness of the proposed continuous-time algorithm.
Accelerated Convergence Algorithm for Distributed Constrained Optimization under Time-Varying General Directed Graphs. This paper studies a class of distributed convex optimization problems by a set of agents in which each agent only has access to its own local convex objective function and the estimate of each agent is restricted to both coupling linear constraint and individual box constraints. Our focus is to devise a distributed primal-dual gradient algorithm for working out the problem over a sequence of time...
Unreliable failure detectors for reliable distributed systems We introduce the concept of unreliable failure detectors and study how they can be used to solve Consensus in asynchronous systems with crash failures. We characterise unreliable failure detectors in terms of two properties—completeness and accuracy. We show that Consensus can be solved even with unreliable failure detectors that make an infinite number of mistakes, and determine which ones can be used to solve Consensus despite any number of crashes, and which ones require a majority of correct processes. We prove that Consensus and Atomic Broadcast are reducible to each other in asynchronous systems with crash failures; thus, the above results also apply to Atomic Broadcast. A companion paper shows that one of the failure detectors introduced here is the weakest failure detector for solving Consensus [Chandra et al. 1992].
A Fast and High Quality Multilevel Scheme for Partitioning Irregular Graphs Recently, a number of researchers have investigated a class of graph partitioning algorithms that reduce the size of the graph by collapsing vertices and edges, partition the smaller graph, and then uncoarsen it to construct a partition for the original graph (Bui and Jones, Proc. of the 6th SIAM Conference on Parallel Processing for Scientific Computing, 1993, 445-452; Hendrickson and Leland, A Multilevel Algorithm for Partitioning Graphs, Tech. report SAND 93-1301, Sandia National Laboratories, Albuquerque, NM, 1993). From the early work it was clear that multilevel techniques held great promise; however, it was not known if they can be made to consistently produce high quality partitions for graphs arising in a wide range of application domains. We investigate the effectiveness of many different choices for all three phases: coarsening, partition of the coarsest graph, and refinement. In particular, we present a new coarsening heuristic (called heavy-edge heuristic) for which the size of the partition of the coarse graph is within a small factor of the size of the final partition obtained after multilevel refinement. We also present a much faster variation of the Kernighan-Lin (KL) algorithm for refining during uncoarsening. We test our scheme on a large number of graphs arising in various domains including finite element methods, linear programming, VLSI, and transportation. Our experiments show that our scheme produces partitions that are consistently better than those produced by spectral partitioning schemes in substantially smaller time. Also, when our scheme is used to compute fill-reducing orderings for sparse matrices, it produces orderings that have substantially smaller fill than the widely used multiple minimum degree algorithm.
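To make the heavy-edge idea concrete, here is a minimal Python sketch of one matching pass over a toy weighted-adjacency representation; the contraction of matched pairs into the coarser graph and the later refinement phases are omitted, and the data structure is an assumption, not the paper's implementation:

```python
import random

def heavy_edge_matching(adj):
    """One coarsening pass: visit vertices in random order and match each
    unmatched vertex with the unmatched neighbour along its heaviest edge.
    adj: {u: {v: weight}} for an undirected graph. Returns {vertex: partner}."""
    match = {}
    order = list(adj)
    random.shuffle(order)
    for u in order:
        if u in match:
            continue
        # heaviest incident edge whose other endpoint is still unmatched
        cands = [(w, v) for v, w in adj[u].items() if v not in match]
        if cands:
            _, v = max(cands)
            match[u], match[v] = v, u
        else:
            match[u] = u          # stays single in this round
    return match

# Toy graph: edge (0, 1) is heavy, so it tends to be collapsed first.
adj = {0: {1: 5, 2: 1}, 1: {0: 5, 3: 1}, 2: {0: 1, 3: 2}, 3: {1: 1, 2: 2}}
print(heavy_edge_matching(adj))
```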
Controllability and observability of Boolean control networks The controllability and observability of Boolean control networks are investigated. After a brief review of converting a logic dynamics to a discrete-time linear dynamics with a transition matrix, some formulas are obtained for retrieving the network and its logical dynamic equations from this network transition matrix. Based on the discrete-time dynamics, controllability via two kinds of inputs is revealed by providing the corresponding reachable sets precisely. Then the problem of observability is also solved by giving necessary and sufficient conditions.
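A small illustrative sketch of the reachability computation underlying such controllability tests: once the logical dynamics are coded as an integer state-transition map (which the algebraic, transition-matrix form reduces to), the reachable set is the fixed point of repeated input application. The two-node dynamics below are invented for illustration:

```python
def reachable(step, n_inputs, x0):
    """Reachable set of a Boolean control network given its integer-coded
    state-transition map step(x, u) -> x' (the algebraic form
    x(t+1) = L * u(t) * x(t) reduces to such a map)."""
    seen = {x0}
    frontier = {x0}
    while frontier:
        frontier = {step(x, u) for x in frontier
                    for u in range(n_inputs)} - seen
        seen |= frontier
    return seen

# Invented 2-node network with one binary input:
# x1' = x2 XOR u, x2' = x1 AND u; the state is coded as 2*x1 + x2.
def step(x, u):
    x1, x2 = x >> 1, x & 1
    return 2 * (x2 ^ u) + (x1 & u)

print(reachable(step, n_inputs=2, x0=0))  # {0, 1, 2, 3}: controllable from 0
```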
The M-Machine multicomputer The M-Machine is an experimental multicomputer being developed to test architectural concepts motivated by the constraints of modern semiconductor technology and the demands of programming systems. The M-Machine computing nodes are connected with a 3-D mesh network; each node is a multithreaded processor incorporating 9 function units, on-chip cache, and local memory. The multiple function units are used to exploit both instruction-level and thread-level parallelism. A user accessible message passing system yields fast communication and synchronization between nodes. Rapid access to remote memory is provided transparently to the user with a combination of hardware and software mechanisms. This paper presents the architecture of the M-Machine and describes how its mechanisms attempt to maximize both single thread performance and overall system throughput. The architecture is complete and the MAP chip, which will serve as the M-Machine processing node, is currently being implemented.
SPONGENT: a lightweight hash function This paper proposes spongent - a family of lightweight hash functions with hash sizes of 88 (for preimage resistance only), 128, 160, 224, and 256 bits based on a sponge construction instantiated with a present-type permutation, following the hermetic sponge strategy. Its smallest implementations in ASIC require 738, 1060, 1329, 1728, and 1950 GE, respectively. To the best of our knowledge, at all security levels attained, it is the hash function with the smallest hardware footprint published so far, although this parameter is highly technology-dependent. spongent offers a lot of flexibility in terms of serialization degree and speed. We explore some of its numerous implementation trade-offs. We furthermore present a security analysis of spongent. Basing the design on a present-type primitive provides confidence in its security with respect to the most important attacks. Several dedicated attack approaches are also investigated.
Noise Analysis and Simulation Method for a Single-Slope ADC With CDS in a CMOS Image Sensor Many mixed-signal circuits are nonlinear time-varying systems whose noise estimation cannot be obtained from the conventional frequency domain noise simulation (FNS). Although the transient noise simulation (TNS) supported by a commercial simulator takes into account nonlinear time-varying characteristics of the circuit, its simulation time is unacceptably long to obtain meaningful noise estimatio...
Practical Timing Side Channel Attacks against Kernel Space ASLR Due to the prevalence of control-flow hijacking attacks, a wide variety of defense methods to protect both user space and kernel space code have been developed in the past years. A few examples that have received widespread adoption include stack canaries, non-executable memory, and Address Space Layout Randomization (ASLR). When implemented correctly (i.e., a given system fully supports these protection methods and no information leak exists), the attack surface is significantly reduced and typical exploitation strategies are severely thwarted. All modern desktop and server operating systems support these techniques and ASLR has also been added to different mobile operating systems recently. In this paper, we study the limitations of kernel space ASLR against a local attacker with restricted privileges. We show that an adversary can implement a generic side channel attack against the memory management system to deduce information about the privileged address space layout. Our approach is based on the intrinsic property that the different caches are shared resources on computer systems. We introduce three implementations of our methodology and show that our attacks are feasible on four different x86-based CPUs (both 32- and 64-bit architectures) and also applicable to virtual machines. As a result, we can successfully circumvent kernel space ASLR on current operating systems. Furthermore, we also discuss mitigation strategies against our attacks, and propose and implement a defense solution with negligible performance overhead.
A 12.8 GS/s Time-Interleaved ADC With 25 GHz Effective Resolution Bandwidth and 4.6 ENOB This paper presents a 12.8 GS/s 32-way hierarchically time-interleaved SAR ADC with 4.6 ENOB in 65 nm CMOS. The prototype utilizes hierarchical sampling and cascode sampler circuits to enable greater than 25 GHz 3 dB effective resolution bandwidth (ERBW). We further employ a pseudo-differential SAR ADC to save power and area. The core circuit occupies only 0.23 mm 2 and consumes a total of 162 mW from dual 1.2 V/1.1 V supplies. The design achieves a SNDR of 29.4 dB at low frequencies and 26.4 dB at 25 GHz, resulting in a figure-of-merit of 0.79 pJ/conversion-step. As will be further described in the paper, the circuit architecture used in this prototype enables expansion to 25.6 GS/s or 51.2 GS/s via additional interleaving without significantly impacting ERBW.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54 ×(CPU), 1.56 ×(GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02 ×(memristor-based PIM) and 10.48 ×(CMOS-based PIM), on average.
Scores: 1.2, 0.2, 0.1, 0.05, 0.013333, 0, 0, 0, 0, 0, 0, 0, 0, 0
State Machine Replication for the Masses with BFT-SMART The last fifteen years have seen an impressive amount of work on protocols for Byzantine fault-tolerant (BFT) state machine replication (SMR). However, there is still a need for practical and reliable software libraries implementing this technique. BFT-SMART is an open-source Java-based library implementing robust BFT state machine replication. Some of the key features of this library that distinguish it from similar works (e.g., PBFT and UpRight) are improved reliability, modularity as a first-class property, multicore-awareness, reconfiguration support and a flexible programming interface. When compared to other SMR libraries, BFT-SMART achieves better performance and is able to withstand a number of real-world faults that previous implementations cannot.
Demystifying Fog Computing: Characterizing Architectures, Applications and Abstractions Internet of Things (IoT) has accelerated the deployment of millions of sensors at the edge of the network, through Smart City infrastructure and lifestyle devices. Cloud computing platforms are often tasked with handling these large volumes and fast streams of data from the edge. Recently, Fog computing has emerged as a concept for low-latency and resource-rich processing of these observation streams, to complement Edge and Cloud computing. In this paper, we review various dimensions of system architecture, application characteristics and platform abstractions that are manifest in this Edge, Fog and Cloud eco-system. We highlight novel capabilities of the Edge and Fog layers, such as physical and application mobility, privacy sensitivity, and a nascent runtime environment. IoT application case studies based on first-hand experiences across diverse domains drive this categorization. We also highlight the gap between the potential and the reality of Fog computing, and identify challenges that need to be overcome for the solution to be sustainable. Taken together, our article can help platform and application developers bridge the gap that remains in making Fog computing viable.
An Object Store Service for a Fog/Edge Computing Infrastructure Based on IPFS and a Scale-Out NAS Fog and Edge Computing infrastructures have been proposed to address the latency issue of the current Cloud Computing platforms. While a couple of works illustrated the advantages of these infrastructures, in particular for Internet of Things (IoT) applications, elementary Cloud services that can take advantage of the geo-distribution of resources have not been proposed yet. In this paper, we propose a first-class object store service for Fog/Edge facilities. Our proposal is built with Scale-out Network Attached Storage systems (NAS) and IPFS, a BitTorrent-based object store spread throughout the Fog/Edge infrastructure. Without impacting the IPFS advantages, particularly in terms of data mobility, the use of a Scale-out NAS on each site reduces the inter-site exchanges that are costly but mandatory for the metadata management in the original IPFS implementation. Several experiments conducted on the Grid'5000 testbed are analyzed; they confirm, first, the benefit of using an object store service spread at the Edge and, second, the importance of mitigating inter-site accesses. The paper concludes by giving a few directions to improve the performance and fault tolerance criteria of our Fog/Edge Object Store Service.
SA-Chord: A Self-Adaptive P2P Overlay Network Pure Edge Computing relies on peer-to-peer overlay networks to realize the communication backbone between participating entities. In these settings, entities are characterized by high heterogeneity, mobility, and variability, which introduce runtime uncertainty and may harm the dependability of the network. Departing from state-of-the-art solutions, overlay networks for Pure Edge Computing should take into account the dynamics of the operating environment and self-adapt their topology accordingly, in order to increase the dependability of the communication. To this end, this paper discusses the preliminary development and validation of SA-Chord, a self-adaptive version of the well-known Chord protocol, able to adapt the network topology according to a given global goal. SA-Chord has been validated through simulation against two distinct goals: (i) minimize energy consumption and (ii) maximize network throughput. Simulation results are promising and show how SA-Chord efficiently and effectively achieves a given goal.
A proposal of a distributed access control over Fog computing: The ITS use case The Internet of Things (IoT) raises many security challenges related to the different applications that can be deployed in these environments. IoT access control systems must respond to the new IoT requirements such as scalability, dynamicity, real-time interaction and resource constraints. The goal of this paper is to propose an approach based on Fog and Distributed Hash Table (DHT) toward access control for the Internet of Things. To evaluate the performance of our access solution, we used NS-3 and SUMO. The preliminary results show an acceptable overhead for the considered Intelligent Transport System (ITS) scenario.
Fog Computing: Helping the Internet of Things Realize Its Potential. The Internet of Things (IoT) could enable innovations that enhance the quality of life, but it generates unprecedented amounts of data that are difficult for traditional systems, the cloud, and even edge computing to handle. Fog computing is designed to overcome these limitations.
Threaded code The concept of “threaded code” is presented as an alternative to machine language code. Hardware and software realizations of it are given. In software it is realized as interpretive code not needing an interpreter. Extensions and optimizations are mentioned.
Leveraging on-chip voltage regulators as a countermeasure against side-channel attacks Side-channel attacks have become a significant threat to integrated circuit security. Circuit level techniques are proposed in this paper as a countermeasure against side-channel attacks. A distributed on-chip power delivery system consisting of multi-level switched capacitor (SC) voltage converters is proposed where the individual interleaved stages are turned on and turned off either based on the workload information or pseudo-randomly to scramble the power consumption profile. In the case that the changes in the workload demand do not trigger the power delivery system to turn on or off individual stages, the active stages are reshuffled with so-called converter-reshuffling (CoRe) to insert random spikes in the power consumption profile. An entropy-based metric is developed to evaluate the security performance of the proposed converter-reshuffling technique as compared to three other existing on-chip power delivery schemes. The increase in the power trace entropy with the CoRe scheme is also demonstrated with simulation results to further verify the theoretical analysis.
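The entropy idea can be illustrated generically: quantize a power trace into bins and compute Shannon entropy, expecting a scrambled trace to score higher. This is a generic sketch with invented traces, not the paper's metric definition or simulation data:

```python
import numpy as np

def trace_entropy(samples, bins=64):
    """Shannon entropy (bits) of a quantized power trace; higher entropy
    means the trace reveals less structure about the underlying workload."""
    hist, _ = np.histogram(samples, bins=bins)
    p = hist[hist > 0] / hist.sum()
    return -np.sum(p * np.log2(p))

rng = np.random.default_rng(1)
steady = 1.0 + 0.01 * rng.standard_normal(10_000)            # unprotected
reshuffled = steady + rng.choice([0.0, 0.3], size=10_000)    # random spikes
print(trace_entropy(steady), trace_entropy(reshuffled))      # entropy rises
```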
The PARSEC benchmark suite: characterization and architectural implications This paper presents and characterizes the Princeton Application Repository for Shared-Memory Computers (PARSEC), a benchmark suite for studies of Chip-Multiprocessors (CMPs). Previous available benchmarks for multiprocessors have focused on high-performance computing applications and used a limited number of synchronization methods. PARSEC includes emerging applications in recognition, mining and synthesis (RMS) as well as systems applications which mimic large-scale multithreaded commercial programs. Our characterization shows that the benchmark suite covers a wide spectrum of working sets, locality, data sharing, synchronization and off-chip traffic. The benchmark suite has been made available to the public.
Gossip-based aggregation in large dynamic networks As computer networks increase in size, become more heterogeneous and span greater geographic distances, applications must be designed to cope with the very large scale, poor reliability, and often, with the extreme dynamism of the underlying network. Aggregation is a key functional building block for such applications: it refers to a set of functions that provide components of a distributed system access to global information including network size, average load, average uptime, location and description of hotspots, and so on. Local access to global information is often very useful, if not indispensable for building applications that are robust and adaptive. For example, in an industrial control application, some aggregate value reaching a threshold may trigger the execution of certain actions; a distributed storage system will want to know the total available free space; load-balancing protocols may benefit from knowing the target average load so as to minimize the load they transfer. We propose a gossip-based protocol for computing aggregate values over network components in a fully decentralized fashion. The class of aggregate functions we can compute is very broad and includes many useful special cases such as counting, averages, sums, products, and extremal values. The protocol is suitable for extremely large and highly dynamic systems due to its proactive structure---all nodes receive the aggregate value continuously, thus being able to track any changes in the system. The protocol is also extremely lightweight, making it suitable for many distributed applications including peer-to-peer and grid computing systems. We demonstrate the efficiency and robustness of our gossip-based protocol both theoretically and experimentally under a variety of scenarios including node and communication failures.
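A minimal sketch of the core averaging primitive behind such gossip aggregation: pairwise exchanges preserve the sum, so every node's value converges to the global average. This toy version picks partners uniformly at random rather than over a real network topology as in the paper; counting and sums follow from the same primitive with different initializations:

```python
import random

def gossip_average(values, rounds=200, seed=0):
    """Pairwise averaging gossip: each exchange replaces two nodes' values
    by their mean. The total is invariant, so all values converge to the
    global average using only local interactions."""
    rng = random.Random(seed)
    v = list(values)
    n = len(v)
    for _ in range(rounds * n):
        i, j = rng.sample(range(n), 2)   # stand-in for "pick a neighbour"
        v[i] = v[j] = (v[i] + v[j]) / 2
    return v

loads = [10.0, 0.0, 4.0, 2.0, 9.0]
print(gossip_average(loads))   # every entry approaches sum(loads)/5 = 5.0
```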
Linear Amplification with Nonlinear Components A technique for producing bandpass linear amplification with nonlinear components (LINC) is described. The bandpass signal first is separated into two constant envelope component signals. All of the amplitude and phase information of the original bandpass signal is contained in phase modulation on the component signals. These constant envelope signals can be amplified or translated in frequency by amplifiers or mixers which have nonlinear input-output amplitude transfer characteristics. Passive linear combining of the amplified and/or translated component signals produces an amplified and/or translated replica of the original signal.
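The decomposition the abstract describes is easy to state in complex baseband: a signal s = a*e^{j*phi} with a <= A splits into two constant-envelope components s1,2 = (A/2)*e^{j*(phi +/- theta)} with theta = arccos(a/A), whose passive sum restores s. A short numerical check, with an invented test signal:

```python
import numpy as np

# Complex-baseband test signal with a varying envelope (illustrative).
t = np.linspace(0, 1, 1000)
s = (0.3 + 0.6 * np.cos(2 * np.pi * 3 * t)) * np.exp(1j * 2 * np.pi * 20 * t)

a_max = np.max(np.abs(s))
phi = np.angle(s)
theta = np.arccos(np.abs(s) / a_max)      # out-phasing angle

# Two constant-envelope components carrying the amplitude info as phase:
s1 = 0.5 * a_max * np.exp(1j * (phi + theta))
s2 = 0.5 * a_max * np.exp(1j * (phi - theta))

assert np.allclose(s1 + s2, s)            # linear recombination restores s
assert np.allclose(np.abs(s1), 0.5 * a_max)
assert np.allclose(np.abs(s2), 0.5 * a_max)
```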
Opportunistic Information Dissemination in Mobile Ad-hoc Networks: The Profit of Global Synchrony The topic of this paper is the study of Information Dissemination in Mobile Ad-hoc Networks by means of deterministic protocols. We characterize the connectivity resulting from the movement, from failures and from the fact that nodes may join the computation at different times with two values, α and β, so that, within α time slots, some node that has the information must be connected to some node without it for at least β time slots. The protocols studied are classified into three classes: oblivious (the transmission schedule of a node is only a function of its ID), quasi-oblivious (the transmission schedule may also depend on a global time), and adaptive. The main contribution of this work concerns negative results. Contrasting the lower and upper bounds derived, interesting complexity gaps among protocol classes are observed. More precisely, in order to guarantee any progress towards solving the problem, it is shown that β must be at least n − 1 in general, but that β ∈ Ω(n²/log n) if an oblivious protocol is used. Since quasi-oblivious protocols can guarantee progress with β ∈ O(n), this represents a significant gap, almost linear in β, between oblivious and quasi-oblivious protocols. Regarding the time to complete the dissemination, a lower bound of Ω(nα + n³/log n) is proved for oblivious protocols, which is tight up to a polylogarithmic factor because a constructive O(nα + n³ log n) upper bound exists for the same class. It is also proved that adaptive protocols require Ω(nα + n²), which is optimal given that a matching upper bound can be proved for quasi-oblivious protocols. These results show that the gap in time complexity between oblivious and quasi-oblivious, and hence adaptive, protocols is almost linear. This gap is what we call the profit of global synchrony, since it represents the gain the network obtains from global synchrony with respect to not having it.
Towards elastic SDR architectures using dynamic task management. SDR platforms integrating several types and numbers of processing elements in System-on-Chips become an attractive solution for baseband processing in wireless systems. In order to cope with the diversity of protocol applications and the heterogeneity of multi-core architectures, a hierarchical approach for workload distribution is proposed in this paper. Specifically, a system-level scheduler is employed to map applications to multiple processing clusters, complemented with a cluster-level scheduler - the CoreManager - for dynamic resource allocation and configuration as well as for task and data scheduling. A performance analysis of the proposed approach is presented, which shows the advantages of dynamic scheduling against a static approach for variable workloads in the LTE-Advanced uplink multi-user scenarios.
Multi-Channel Neural Recording Implants: A Review. The recently growing progress in neuroscience research and relevant achievements, as well as advancements in the fabrication process, have increased the demand for neural interfacing systems. Brain-machine interfaces (BMIs) have been revealed to be a promising method for the diagnosis and treatment of neurological disorders and the restoration of sensory and motor function. Neural recording implants, as a part of BMI, are capable of capturing brain signals, and amplifying, digitizing, and transferring them outside of the body with a transmitter. The main challenges of designing such implants are minimizing power consumption and the silicon area. In this paper, multi-channel neural recording implants are surveyed. After presenting various neural-signal features, we investigate main available neural recording circuit and system architectures. The fundamental blocks of available architectures, such as neural amplifiers, analog to digital converters (ADCs) and compression blocks, are explored. We cover the various topologies of neural amplifiers, provide a comparison, and probe their design challenges. To achieve a relatively high SNR at the output of the neural amplifier, noise reduction techniques are discussed. Also, to transfer neural signals outside of the body, they are digitized using data converters, then in most cases, the data compression is applied to mitigate power consumption. We present the various dedicated ADC structures, as well as an overview of main data compression methods.
Scores: 1.2, 0.2, 0.2, 0.2, 0.2, 0.04, 0, 0, 0, 0, 0, 0, 0, 0
Flipping bits in memory without accessing them: an experimental study of DRAM disturbance errors Memory isolation is a key property of a reliable and secure computing system--an access to one memory address should not have unintended side effects on data stored in other addresses. However, as DRAM process technology scales down to smaller dimensions, it becomes more difficult to prevent DRAM cells from electrically interacting with each other. In this paper, we expose the vulnerability of commodity DRAM chips to disturbance errors. By reading from the same address in DRAM, we show that it is possible to corrupt data in nearby addresses. More specifically, activating the same row in DRAM corrupts data in nearby rows. We demonstrate this phenomenon on Intel and AMD systems using a malicious program that generates many DRAM accesses. We induce errors in most DRAM modules (110 out of 129) from three major DRAM manufacturers. From this we conclude that many deployed systems are likely to be at risk. We identify the root cause of disturbance errors as the repeated toggling of a DRAM row's wordline, which stresses inter-cell coupling effects that accelerate charge leakage from nearby rows. We provide an extensive characterization study of disturbance errors and their behavior using an FPGA-based testing platform. Among our key findings, we show that (i) it takes as few as 139K accesses to induce an error and (ii) up to one in every 1.7K cells is susceptible to errors. After examining various potential ways of addressing the problem, we propose a low-overhead solution to prevent the errors.
Sparc T4: A Dynamically Threaded Server-on-a-Chip The Sparc T4 is the next generation of Oracle's multicore, multithreaded 64-bit Sparc server processor. It delivers significant performance improvements over its predecessor, the Sparc T3 processor. The authors describe Sparc T4's key features and detail the microarchitecture of the dynamically threaded S3 processor core, which is implemented on Sparc T4.
RHMD: evasion-resilient hardware malware detectors. Hardware Malware Detectors (HMDs) have recently been proposed as a defense against the proliferation of malware. These detectors use low-level features that can be collected by the hardware performance monitoring units on modern CPUs to detect malware as a computational anomaly. Several aspects of the detector construction have been explored, leading to detectors with high accuracy. In this paper, we explore the question of how well evasive malware can avoid detection by HMDs. We show that existing HMDs can be effectively reverse-engineered and subsequently evaded, allowing malware to hide from detection without substantially slowing it down (which is important for certain types of malware). This result demonstrates that the current generation of HMDs can be easily defeated by evasive malware. Next, we explore how well a detector can evolve if it is exposed to this evasive malware during training. We show that simple detectors, such as logistic regression, cannot detect the evasive malware even with retraining. More sophisticated detectors can be retrained to detect evasive malware, but the retrained detectors can be reverse-engineered and evaded again. To address these limitations, we propose a new type of Resilient HMDs (RHMDs) that stochastically switch between different detectors. These detectors can be shown to be provably more difficult to reverse engineer based on recent results in probably approximately correct (PAC) learnability theory. We show that indeed such detectors are resilient to both reverse engineering and evasion, and that the resilience increases with the number and diversity of the individual detectors. Our results demonstrate that these HMDs offer effective defense against evasive malware at low additional complexity.
Exploiting Correcting Codes: On the Effectiveness of ECC Memory Against Rowhammer Attacks Given the increasing impact of Rowhammer, and the dearth of adequate other hardware defenses, many in the security community have pinned their hopes on error-correcting code (ECC) memory as one of the few practical defenses against Rowhammer attacks. Specifically, the expectation is that the ECC algorithm will correct or detect any bits attackers manage to flip in memory in real-world settings. However, the extent to which ECC really protects against Rowhammer is an open research question, due to two key challenges. First, the details of the ECC implementations in commodity systems are not known. Second, existing Rowhammer exploitation techniques cannot yield reliable attacks in presence of ECC memory. In this paper, we address both challenges and provide concrete evidence of the susceptibility of ECC memory to Rowhammer attacks. To address the first challenge, we describe a novel approach that combines a custom-made hardware probe, Rowhammer bit flips, and a cold boot attack to reverse engineer ECC functions on commodity AMD and Intel processors. To address the second challenge, we present ECCploit, a new Rowhammer attack based on composable, data-controlled bit flips and a novel side channel in the ECC memory controller. We show that, while ECC memory does reduce the attack surface for Rowhammer, ECCploit still allows an attacker to mount reliable Rowhammer attacks against vulnerable ECC memory on a variety of systems and configurations. In addition, we show that, despite the non-trivial constraints imposed by ECC, ECCploit can still be powerful in practice and mimic the behavior of prior Rowhammer exploits.
TRRespass: Exploiting the Many Sides of Target Row Refresh After a plethora of high-profile RowHammer attacks, CPU and DRAM vendors scrambled to deliver what was meant to be the definitive hardware solution against the RowHammer problem: Target Row Refresh (TRR). A common belief among practitioners is that, for the latest generation of DDR4 systems that are protected by TRR, RowHammer is no longer an issue in practice. However, in reality, very little is known about TRR. How does TRR exactly prevent RowHammer? Which parts of a system are responsible for operating the TRR mechanism? Does TRR completely solve the RowHammer problem or does it have weaknesses? In this paper, we demystify the inner workings of TRR and debunk its security guarantees. We show that what is advertised as a single mitigation mechanism is actually a series of different solutions coalesced under the umbrella term Target Row Refresh. We inspect and disclose, via a deep analysis, different existing TRR solutions and demonstrate that modern implementations operate entirely inside DRAM chips. Despite the difficulties of analyzing in-DRAM mitigations, we describe novel techniques for gaining insights into the operation of these mitigation mechanisms. These insights allow us to build TRRespass, a scalable black-box RowHammer fuzzer that we evaluate on 42 recent DDR4 modules. TRRespass shows that even the latest generation DDR4 chips with in-DRAM TRR, immune to all known RowHammer attacks, are often still vulnerable to new TRR-aware variants of RowHammer that we develop. In particular, TRRespass finds that, on present-day DDR4 modules, RowHammer is still possible when many aggressor rows are used (as many as 19 in some cases), with a method we generally refer to as Many-sided RowHammer. Overall, our analysis shows that 13 out of the 42 modules from all three major DRAM vendors (i.e., Samsung, Micron, and Hynix) are vulnerable to our TRR-aware RowHammer access patterns, and thus one can still mount existing state-of-the-art system-level RowHammer attacks. In addition to DDR4, we also experiment with LPDDR4(X) <sup xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink">1</sup> chips and show that they are susceptible to RowHammer bit flips too. Our results provide concrete evidence that the pursuit of better RowHammer mitigations must continue.
Another Flip in the Wall of Rowhammer Defenses The Rowhammer bug allows unauthorized modification of bits in DRAM cells from unprivileged software, enabling powerful privilege-escalation attacks. Sophisticated Rowhammer countermeasures have been presented, aiming at mitigating the Rowhammer bug or its exploitation. However, the state of the art provides insufficient insight on the completeness of these defenses. In this paper, we present novel Rowhammer attack and exploitation primitives, showing that even a combination of all defenses is ineffective. Our new attack technique, one-location hammering, breaks previous assumptions on requirements for triggering the Rowhammer bug, i.e., we do not hammer multiple DRAM rows but only keep one DRAM row constantly open. Our new exploitation technique, opcode flipping, bypasses recent isolation mechanisms by flipping bits in a predictable and targeted way in userspace binaries. We replace conspicuous and memory-exhausting spraying and grooming techniques with a novel reliable technique called memory waylaying. Memory waylaying exploits system-level optimizations and a side channel to coax the operating system into placing target pages at attacker-chosen physical locations. Finally, we abuse Intel SGX to hide the attack entirely from the user and the operating system, making any inspection or detection of the attack infeasible. Our Rowhammer enclave can be used for coordinated denial-of-service attacks in the cloud and for privilege escalation on personal computers. We demonstrate that our attacks evade all previously proposed countermeasures for commodity systems.
SafeSpec: Banishing the Spectre of a Meltdown with Leakage-Free Speculation. Speculative attacks, such as Spectre and Meltdown, target speculative execution to access privileged data and leak it through a side-channel. In this paper, we introduce SafeSpec, a new model for supporting speculation in a way that is immune to side-channel leakage by storing side effects of speculative instructions in separate structures until they commit. Additionally, we address the possibility of a covert channel from speculative instructions to committed instructions before these instructions are committed. We develop a cycle-accurate model of a modified design of an x86-64 processor and show that the performance impact is negligible.
Ramulator: A Fast and Extensible DRAM Simulator Recently, both industry and academia have proposed many different roadmaps for the future of DRAM. Consequently, there is a growing need for an extensible DRAM simulator, which can be easily modified to judge the merits of today's DRAM standards as well as those of tomorrow. In this paper, we present Ramulator, a fast and cycle-accurate DRAM simulator that is built from the ground up for extensibility. Unlike existing simulators, Ramulator is based on a generalized template for modeling a DRAM system, which is only later infused with the specific details of a DRAM standard. Thanks to such a decoupled and modular design, Ramulator is able to provide out-of-the-box support for a wide array of DRAM standards: DDR3/4, LPDDR3/4, GDDR5, WIO1/2, HBM, as well as some academic proposals (SALP, AL-DRAM, TLDRAM, RowClone, and SARP). Importantly, Ramulator does not sacrifice simulation speed to gain extensibility: according to our evaluations, Ramulator is 2.5× faster than the next fastest simulator. Ramulator is released under the permissive BSD license.
Is dark silicon useful?: harnessing the four horsemen of the coming dark silicon apocalypse Due to the breakdown of Dennardian scaling, the percentage of a silicon chip that can switch at full frequency is dropping exponentially with each process generation. This utilization wall forces designers to ensure that, at any point in time, large fractions of their chips are effectively dark or dim silicon, i.e., either idle or significantly underclocked. As exponentially larger fractions of a chip's transistors become dark, silicon area becomes an exponentially cheaper resource relative to power and energy consumption. This shift is driving a new class of architectural techniques that "spend" area to "buy" energy efficiency. All of these techniques seek to introduce new forms of heterogeneity into the computational stack. We envision that ultimately we will see widespread use of specialized architectures that leverage these techniques in order to attain orders-of-magnitude improvements in energy efficiency. However, many of these approaches also suffer from massive increases in complexity. As a result, we will need to look towards developing pervasively specialized architectures that insulate the hardware designer and the programmer from the underlying complexity of such systems. In this paper, I discuss four key approaches--the four horsemen--that have emerged as top contenders for thriving in the dark silicon age. Each class carries with its virtues deep-seated restrictions that requires a careful understanding of the underlying tradeoffs and benefits.
GenAx: A Genome Sequencing Accelerator. Genomics can transform health-care through precision medicine. Plummeting sequencing costs would soon make genome testing affordable to the masses. Compute efficiency, however, has to improve by orders of magnitude to sequence and analyze the raw genome data. Sequencing software used today can take several hundreds to thousands of CPU hours to align reads to a reference sequence. This paper presents GenAx, an accelerator for read alignment, a time-consuming step in genome sequencing. It consists of a seeding and seed-extension accelerator. The latter is based on an innovative automata design that was designed from the ground-up to enable hardware acceleration. Unlike conventional Levenshtein automata, it is string independent and scales quadratically with edit distance, instead of string length. It supports critical features commonly used in sequencing such as affine gap scoring and traceback. GenAx provides a throughput of 4,058K reads/s for Illumina 101 bp reads. GenAx achieves 31.7× speedup over the standard BWA-MEM sequence aligner running on a 56-thread dual-socket 14-core Xeon E5 server processor, while reducing power consumption by 12× and area by 5.6×.
Distributed multi-agent optimization with state-dependent communication We study distributed algorithms for solving global optimization problems in which the objective function is the sum of local objective functions of agents and the constraint set is given by the intersection of local constraint sets of agents. We assume that each agent knows only his own local objective function and constraint set, and exchanges information with the other agents over a randomly varying network topology to update his information state. We assume a state-dependent communication model over this topology: communication is Markovian with respect to the states of the agents and the probability with which the links are available depends on the states of the agents. We study a projected multi-agent subgradient algorithm under state-dependent communication. The state-dependence of the communication introduces significant challenges and couples the study of information exchange with the analysis of subgradient steps and projection errors. We first show that the multi-agent subgradient algorithm when used with a constant stepsize may result in the agent estimates to diverge with probability one. Under some assumptions on the stepsize sequence, we provide convergence rate bounds on a “disagreement metric” between the agent estimates. Our bounds are time-nonhomogeneous in the sense that they depend on the initial starting time. Despite this, we show that agent estimates reach an almost sure consensus and converge to the same optimal solution of the global optimization problem with probability one under different assumptions on the local constraint sets and the stepsize sequence.
Architectural overview of the SPEAKeasy system SPEAKeasy is a successful implementation of a software-defined radio (SDR) for military applications. It permits general-purpose digital hardware to communicate over a wide range of frequencies, modulation techniques, data encoding methods, cryptographic types, and other communication parameters. The background of SDRs for military and commercial needs is discussed, and the SPEAKeasy architecture is defined
A Closed-Loop Reconfigurable Switched-Capacitor DC-DC Converter for Sub-mW Energy Harvesting Applications Energy harvesting is an emerging technology for powering wireless sensor nodes, enabling battery-free operation of these devices. In an energy harvesting sensor, a power management circuit is required to regulate the variable harvested voltage to provide a constant supply rail for the sensor circuits. The power management circuit needs to be compact, efficient, and robust to the variations of the input voltage and load current. A closed-form power expression and custom control algorithm for regulation of a switched-capacitor DC-DC converter with optimal conversion efficiency are proposed in this paper. The proposed regulation algorithm automatically adjusts both the voltage gain and switching frequency of a switched-capacitor DC-DC converter based on its input voltage and load current, increasing the power efficiency across a wide input voltage range. The design and simulation of a fully integrated circuit based on the proposed power managing approach is presented. This power management circuit has been simulated in a 0.25 μm standard CMOS process and simulation results confirm that with an input voltage ranging from 0.5 V to 2.5 V, the converter can generate a regulated 1.2 V output rail and deliver a maximum load current of 100 μA. The power conversion efficiency is higher than 74% across a wide range of the input voltage with a maximum efficiency of 83%.
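As a loose illustration of the kind of regulation logic the abstract describes (jointly choosing the conversion ratio and switching frequency from the input voltage and load), here is a hypothetical Python sketch; the ratio set, margin, and component values are assumptions, not the paper's design:

```python
# Illustrative sketch: pick the smallest available conversion ratio that
# still reaches the target rail (ideal SC efficiency <= Vout/(G*Vin)),
# then trim the switching frequency to the load current.
RATIOS = [1/3, 1/2, 2/3, 1.0, 3/2, 2.0, 3.0]   # typical SC gain settings

def select_gain(v_in, v_out_target, margin=1.05):
    feasible = [g for g in RATIOS if g * v_in >= v_out_target * margin]
    return min(feasible) if feasible else max(RATIOS)

def switching_freq(i_load, c_fly=1e-9, v_ripple=0.02):
    # charge per cycle q = C * dV; f must replenish the load current
    return i_load / (c_fly * v_ripple)

for v_in in (0.5, 1.0, 1.8, 2.5):
    g = select_gain(v_in, 1.2)
    print(f"Vin={v_in:.1f} V -> gain {g:.2f}, ideal eff <= {1.2/(g*v_in):.0%}")
```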
A 1V 3.5 μW Bio-AFE With Chopper-Capacitor-Chopper Integrator-Based DSL and Low Power GM-C Filter This brief presents a low-noise, low-power bio-signal acquisition analog front-end (Bio-AFE). It mainly includes a capacitively coupled chopper-stabilized instrumentation amplifier (CCIA), a programmable gain amplifier (PGA), a low-pass filter (LPF), and a successive approximation analog to digital converter (SAR ADC). A chopper-capacitor-chopper integrator based DC servo loop (C3IB-DSL...
Scores: 1.017729, 0.019256, 0.019256, 0.019256, 0.015385, 0.010889, 0.005334, 0.002286, 0.000308, 0.000006, 0, 0, 0, 0
Indoor positioning using ambient radio signals: Data acquisition platform for a long-term study This paper presents an ongoing long-term study exploring indoor positioning systems based on ambient radio signals (such as FM, TV and GSM). We introduce an open-source platform designed to facilitate data acquisition in indoor localization experiments. The platform is currently employed for the creation of a public dataset of geo-referenced ambient radio signal samples. The paper discusses the system design as well as the challenges and lessons learned so far in the year-long experiment.
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use dominance frontiers, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ◇W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ◇W. Thus, ◇W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
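A compact in-process sketch of Chord's lookup rule, which the abstract summarizes: each node forwards a key to its closest preceding finger until the key falls between a node and its successor. The identifiers, ring size, and toy driver below are invented, and a real deployment resolves each hop over RPC:

```python
def between(x, a, b):
    """True if x lies in the half-open ring interval (a, b]."""
    if a < b:
        return a < x <= b
    return x > a or x <= b        # the interval wraps past zero

class ChordNode:
    def __init__(self, node_id):
        self.id = node_id
        self.finger = []          # finger[k] = successor(id + 2**k)

    def find_successor(self, key):
        succ = self.finger[0]     # finger[0] is the immediate successor
        if between(key, self.id, succ.id):
            return succ
        for node in reversed(self.finger):   # closest preceding finger
            if between(node.id, self.id, key) and node.id != key:
                return node.find_successor(key)
        return succ

# Toy ring: m = 4 (identifiers mod 16) with five nodes.
m, ids = 4, [1, 4, 9, 11, 14]
nodes = {i: ChordNode(i) for i in ids}

def successor_of(x):
    x %= 2 ** m
    return nodes[next((i for i in sorted(ids) if i >= x), min(ids))]

for n in nodes.values():
    n.finger = [successor_of(n.id + 2 ** k) for k in range(m)]

print(nodes[1].find_successor(10).id)   # -> 11, the node responsible for key 10
```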
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area² product (EDA²P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA²P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
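As a concrete instance of the method the abstract surveys, here is a standard scaled-form ADMM sketch for the lasso (one of the listed applications); the problem data are random toy values:

```python
import numpy as np

def soft_threshold(v, k):
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

def lasso_admm(A, b, lam, rho=1.0, iters=300):
    """Scaled-form ADMM for 0.5*||Ax - b||^2 + lam*||x||_1."""
    n = A.shape[1]
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    M = A.T @ A + rho * np.eye(n)       # formed once, reused every iteration
    Atb = A.T @ b
    for _ in range(iters):
        x = np.linalg.solve(M, Atb + rho * (z - u))  # ridge-like x-update
        z = soft_threshold(x + u, lam / rho)         # prox of the l1 term
        u += x - z                                   # dual update on x = z
    return z

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 10))
x_true = np.zeros(10)
x_true[[2, 7]] = [3.0, -2.0]
b = A @ x_true + 0.01 * rng.standard_normal(40)
print(np.round(lasso_admm(A, b, lam=0.5), 2))        # sparse, close to x_true
```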
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)-size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε²}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D + √n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(log N), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator; while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via error amplifier) with a moderate-gain high-frequency path (via bandpass filter) at the inputs of PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate two zeros required in a Type III compensator. Constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process with constant Gm/C biasing technique being applied to one of the designs. Measurement result shows that converter output is settled within 7 μs for a load current step of 500 mA. Peak efficiency of 97% is obtained at 360 mW output power, and high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of proposed compensator is reduced by > 75 % in both designs, compared to an equivalent conventional Type III compensator.
Fundamental analysis of a car to car visible light communication system This paper presents a mathematical model for car-to-car (C2C) visible light communications (VLC) that aims to predict the system performance under different communication geometries. A market-weighted headlamp beam pattern model is employed. We consider both the line-of-sight (LOS) and non-line-of-sight (NLOS) links, and outline the relationship between the communication distance, the system bit error rate (BER) performance and the BER distribution on a vertical plane. Results show that by placing a photodetector (PD) at a height of 0.2-0.4 m above road surface in the car, the communications coverage range can be extended up to 20 m at a data rate of 2Mbps.
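For orientation, here is a generic Lambertian line-of-sight link-budget helper of the kind such analyses build on; note the paper itself uses a measured, market-weighted headlamp beam pattern rather than this idealized model, and all parameter values below are assumptions:

```python
import numpy as np

def los_received_power(p_tx, d, phi, psi, m=1, area=1e-4, fov=np.radians(60)):
    """LOS received optical power for an order-m Lambertian emitter:
    P_rx = P_tx * (m+1)/(2*pi*d^2) * cos(phi)^m * A_pd * cos(psi), psi < FOV."""
    if psi > fov:
        return 0.0
    return (p_tx * (m + 1) / (2 * np.pi * d ** 2)
            * np.cos(phi) ** m * area * np.cos(psi))

# Photodetector 10 m ahead, small emission/incidence angles (assumed values).
print(f"{los_received_power(1.0, 10.0, np.radians(5), np.radians(5)):.2e} W")
```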
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
score_0–score_13: 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Polynomial Fuzzy Models for Nonlinear Control: A Taylor Series Approach Classical Takagi-Sugeno (T-S) fuzzy models are formed by convex combinations of linear consequent local models. Such fuzzy models can be obtained from nonlinear first-principle equations by the well-known sector-nonlinearity modeling technique. This paper extends the sector-nonlinearity approach to the polynomial case. This way, generalized polynomial fuzzy models are obtained. The new class of models is polynomial, both in the membership functions and in the consequent models. Importantly, T-S models become a particular case of the proposed technique. Recent possibilities for stability analysis and controller synthesis are also discussed. A set of examples shows that polynomial modeling is able to reduce conservativeness with respect to standard T-S approaches as the degrees of the involved polynomials increase.
Output-Feedback Tracking Control for Polynomial Fuzzy-Model-Based Control Systems This paper presents the output-feedback tracking control of the polynomial fuzzy-model-based control system which consists of a polynomial fuzzy model representing the nonlinear plant and an output-feedback polynomial fuzzy controller connected in a closed loop. The output-feedback polynomial fuzzy controller is employed to drive the system states of the nonlinear plant to follow those of a stable reference model subject to an H∞ performance. Based on the Lyapunov stability theory, sum-of-squares-based stability conditions are obtained to determine the system stability and facilitate the control synthesis. A feasible solution can be found numerically using the third-party Matlab toolbox SOSTOOLS. Simulation results are provided to demonstrate the merits of the proposed control approach.
A new positive linear functional filters design for positive linear systems This paper is concerned with a new time-domain design of positive functional filters for linear time-invariant continuous-time positive multivariable systems affected by bounded disturbances. Roughly speaking, a positive system is a dynamic system whose output remains in the non-negative orthant whenever the initial state and the input are non-negative. The order of the proposed filter is equal to the dimension of the vector to be estimated. This new approach is based on the unbiasedness of the filter using a Sylvester equation; the problem is then solved via Linear Matrix Inequalities (LMI) to find the optimal gain implemented in the positive filter design. All filter matrices are designed such that the dynamics of the estimation error are positive and asymptotically stable. A numerical example is given to illustrate our approach.
A fundamental control performance limit for a class of positive nonlinear systems. A fundamental performance limit is derived for a class of positive nonlinear systems. The performance limit describes the achievable output response in the presence of a positive disturbance and subject to a sign constraint on the allowable input. An explicit optimal input is derived which minimises the maximum output response whilst ensuring that the minimum output response does not fall below a pre-specified lower bound. The result provides a fundamental performance standard against which all control policies, including closed loop schemes, can be compared. Implications of the result are examined in the context of blood glucose regulation for Type 1 Diabetes.
Output tracking control for a class of continuous-time T-S fuzzy systems This paper investigates the problem of output tracking for nonlinear systems with actuator fault using an interval type-2 (IT2) fuzzy model approach. An IT2 state-feedback fuzzy controller is designed to perform the tracking control problem, where the membership functions can be freely chosen since the number of fuzzy rules is different from that of the IT2 T-S fuzzy model. Based on Lyapunov stability theory, an existence condition of an IT2 fuzzy H∞ output tracking controller is obtained to guarantee that the output of the closed-loop IT2 control system can track the output of a given reference model well in the H∞ sense. Finally, two illustrative examples are given to demonstrate the effectiveness and merits of the proposed design techniques.
A comprehensive review on type 2 fuzzy logic applications: Past, present and future In this paper a concise overview of the work that has been done by various researchers in the area of type-2 fuzzy logic is analyzed and discussed. Type-2 fuzzy systems have been widely applied in the fields of intelligent control, pattern recognition and classification, among others. The overview mainly focuses on past, present and future trends of type-2 fuzzy logic applications. Of utmost importance is the last part, outlining possible areas of applied research in type-2 FL in the future. The major contribution of the paper is a briefing of the most relevant work in the area of type-2 fuzzy logic, including its theoretical and practical implications, as well as a view of possible future work and trends in this area of research. We believe that this paper will provide a good platform for people interested in this area for their future research work.
Type-2 Fuzzy Control For Line Following Using Line Detection Images This work presents a comparative analysis of Type-1 and Type-2 fuzzy controllers to drive an omnidirectional mobile robot in line-following tasks using line detection images. Image processing uses a Prewitt filter for edge detection and determines the error from the line location. The control systems are tested using four different paths from the Robotino® SIM simulator. Also, two different strategies in the design and implementation of the controllers are presented. In the first one, a PD controller scheme is extended by using a fuzzy system to provide adaptive P and D parameters; additionally, Type-2 fuzzy sets are used to give robustness to the controller. In the second case, a fuzzy controller is designed to compute the control variables directly and is extended to a Type-2 fuzzy controller. Finally, experimental results and a comparative analysis are presented for the five control schemes, comparing the running time and the standard deviation to measure the robustness of the control systems.
Controllability and Observability of a Well-Posed System Coupled With a Finite-Dimensional System We consider coupled systems consisting of a well-posed and strictly proper (hence regular) subsystem and a finite-dimensional subsystem connected in feedback. The external world interacts with the coupled system via the finite-dimensional part, which receives the external input and sends out the output. Under several assumptions, we derive well-posedness, regularity, exact (or approximate) controllability and exact (or approximate) observability results for such coupled systems.
Distributed Primal-Dual Subgradient Method for Multiagent Optimization via Consensus Algorithms. This paper studies the problem of optimizing the sum of multiple agents' local convex objective functions, subject to global convex inequality constraints and a convex state constraint set over a network. Through characterizing the primal and dual optimal solutions as the saddle points of the Lagrangian function associated with the problem, we propose a distributed algorithm, named the distributed primal-dual subgradient method, to provide approximate saddle points of the Lagrangian function, based on the distributed average consensus algorithms. Under Slater's condition, we obtain bounds on the convergence properties of the proposed method for a constant step size. Simulation examples are provided to demonstrate the effectiveness of the proposed method.
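For orientation, the centralized primal-dual subgradient iteration on the Lagrangian, which the distributed method emulates via consensus averaging, can be written as follows; this is a generic textbook sketch, not the paper's exact update rule:

    x^{k+1} = P_X\big( x^k - \alpha\, g_x^k \big), \qquad g_x^k \in \partial_x L(x^k, \mu^k)
    \mu^{k+1} = P_{\ge 0}\big( \mu^k + \alpha\, g_\mu^k \big), \qquad g_\mu^k \in \partial_\mu L(x^k, \mu^k)

where L(x, \mu) = \sum_i f_i(x) + \mu^\top g(x) is the Lagrangian of the sum of the agents' objectives subject to the inequality constraints g(x) \le 0, \alpha > 0 is the constant step size, and P denotes Euclidean projection onto the state constraint set X and the non-negative orthant, respectively.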
Quadratic programming with one negative eigenvalue is NP-hard We show that the problem of minimizing a concave quadratic function with one concave direction is NP-hard. This result can be interpreted as an attempt to understand exactly what makes nonconvex quadratic programming problems hard. Sahni in 1974 [8] showed that quadratic programming with a negative definite quadratic term (n negative eigenvalues) is NP-hard, whereas Kozlov, Tarasov and Hacijan [2] showed in 1979 that the ellipsoid algorithm solves the convex quadratic problem (no negative eigenvalues) in polynomial time. This report shows that even one negative eigenvalue makes the problem NP-hard.
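To make the boundary concrete, the sketch below (an assumed example, not taken from the report) constructs a quadratic objective whose Hessian has exactly one negative eigenvalue, i.e., the smallest instance class shown here to be NP-hard:

    import numpy as np

    # f(x) = 0.5 * x^T Q x with exactly one concave direction:
    # Q is the identity except for a single -1 eigenvalue along e_1.
    n = 5
    Q = np.eye(n)
    Q[0, 0] = -1.0  # the lone negative eigenvalue

    eigvals = np.linalg.eigvalsh(Q)
    assert (eigvals < 0).sum() == 1  # one negative, n-1 positive
    print(eigvals)

Minimizing such an f over a bounded polyhedron is already NP-hard, whereas the fully convex case (no negative eigenvalues) is solvable in polynomial time by the ellipsoid method.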
Bluespec System Verilog: efficient, correct RTL from high level specifications.
1-5.6 Gb/s CMOS clock and data recovery IC with a static phase offset compensated linear phase detector This study presents a 1-5.6 Gb/s CMOS clock and data recovery (CDR) integrated circuit (IC) implemented in a 0.13 μm CMOS process. The CDR uses a half-rate linear phase detector (PD) of which the static phase offset is compensated by an additional binary PD and a digital charge pump (CP) calibration block. During initialisation, the static phase offset is detected by the binary PD and the CP current is controlled accordingly to compensate the static phase offset. Also, the architecture of this CDR IC is designed for a clock embedded serial data interface which transfers CDR training clock patterns before normal random data signals. The implemented IC consumes 16-22 mA from a 1.2 V core supply for data rates of 1-5.6 Gb/s and 20 mA from a 3.3 V I/O supply for two preamplifiers and low-voltage differential signalling drivers. When the 2³¹−1 pseudorandom binary sequence is used, the measured bit-error rate is better than 10⁻¹² and the jitter tolerance is 0.3 UIpp. The recovered clock jitter is 21.6 and 4.2 ps rms for 1 and 5.6 Gb/s data rates, respectively.
Automated text mining for requirements analysis of policy documents Businesses and organizations in jurisdictions around the world are required by law to provide their customers and users with information about their business practices in the form of policy documents. Requirements engineers analyze these documents as sources of requirements, but this analysis is a time-consuming and mostly manual process. Moreover, policy documents contain legalese and present readability challenges to requirements engineers seeking to analyze them. In this paper, we perform a large-scale analysis of 2,061 policy documents, including policy documents from the Google Top 1000 most visited websites and the Fortune 500 companies, for three purposes: (1) to assess the readability of these policy documents for requirements engineers; (2) to determine if automated text mining can indicate whether a policy document contains requirements expressed as either privacy protections or vulnerabilities; and (3) to establish the generalizability of prior work in the identification of privacy protections and vulnerabilities from privacy policies to other policy documents. Our results suggest that this requirements analysis technique, developed on a small set of policy documents in two domains, may generalize to other domains.
A Bidirectional Neural Interface IC With Chopper Stabilized BioADC Array and Charge Balanced Stimulator. We present a bidirectional neural interface with a 4-channel biopotential analog-to-digital converter (bioADC) and a 4-channel current-mode stimulator in 180 nm CMOS. The bioADC directly transduces microvolt biopotentials into a digital representation without a voltage-amplification stage. Each bioADC channel comprises a continuous-time first-order ΔΣ modulator with a chopper-stabilized OTA input ...
score_0–score_13: 1.026662, 0.026522, 0.026326, 0.026113, 0.022222, 0.022222, 0.01504, 0.007704, 0.000005, 0, 0, 0, 0, 0
Analysis of First-Order Anti-Aliasing Integration Sampler Performance of the first-order anti-aliasing integration sampler used in software-defined radio (SDR) receivers is analyzed versus all practical nonidealities. The nonidealities that are considered in this paper are transconductor finite output resistance, switch resistance, nonzero rise and fall times of the sampling clock, charge injection, clock jitter, and noise. It is proved that the filter i...
MEMS-based RF channel selection for true software-defined cognitive radio and low-power sensor communications An evaluation of the potential for MEMS technologies to realize the RF front-end frequency gating spectrum analyzer function needed by true software-defined cognitive radios and ultra-low-power autonomous sensor network radios is presented. Here, RF channel selection (as opposed to band selection), which removes all interferers, even those in band, and passes only the desired channel, is key to substantial potential increases in call volume with simultaneous reductions in power consumption. The relevant MEMS technologies most conducive to RF channel-selecting front-ends include vibrating micromechanical resonators that exhibit record on-chip Qs at gigahertz frequencies; resonant switches that provide extremely efficient switched-mode power gain for both transmit and receive paths; medium-scale integrated micromechanical circuits that implement on/off switchable filter-amplifier banks; and fabrication technologies that integrate MEMS together with foundry CMOS transistors in a fully monolithic low-capacitance single-chip process. The many issues that make realization of RF channel selection a truly challenging proposition include resonator drift stability, mechanical circuit complexity, repeatability and fabrication tolerances, and the need for resonators at gigahertz frequencies with simultaneous high Q (>30,000) and low impedance (e.g., 50 Ω for conventional systems).
A Second-Order Antialiasing Prefilter for a Software-Defined Radio Receiver A new architecture is presented for a sinc²(f) filter intended to sample channels of varying bandwidth when surrounded by blockers and adjacent bands. The sample rate is programmable from 5 to 40 MHz, and aliases are suppressed by 45 dB or more. The noise and linearity performance of the filter is analyzed, and the effects of various imperfections such as transconductor finite output impedance, interchannel gain mismatch, and residual offsets in the channels are studied. Furthermore, it is proved that the filter is robust to clock jitter. The 0.13-μm CMOS circuit consumes 6 mA from a 1.2-V supply.
Continuous Time Level Crossing Sampling ADC for Bio-Potential Recording Systems In this paper we present a fixed-window level-crossing sampling analog-to-digital converter for bio-potential recording sensors. This is the first proposed and fully implemented fixed-window level-crossing ADC without local DACs and clocks. The circuit is designed to reduce data size, power, and silicon area in future wireless neurophysiological sensor systems. We built a testing system to measure bio-potential signals and used it to evaluate the performance of the circuit. The bio-potential amplifier offers a gain of 53 dB within a bandwidth of 200 Hz-20 kHz. The input-referred rms noise is 2.8 µV. In the asynchronous level-crossing ADC, the minimum delta resolution is 4 mV. The input signal frequency of the ADC is up to 5 kHz. The system was fabricated using the AMI 0.5 µm CMOS process. The chip size is 1.5 mm by 1.5 mm. The power consumption of the 4-channel system from a 3.3 V supply is 118.8 µW in the static state and 501.6 µW at a 240 kS/s sampling rate. The conversion efficiency is 1.6 nJ/conversion.
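A fixed-window level-crossing sampler of the kind described can be illustrated in a few lines; the delta threshold, the test signal, and the one-event-per-input-sample simplification are all assumptions of this sketch:

    import numpy as np

    def level_crossing_sample(signal, delta):
        # Emit (index, direction) events whenever the input moves more than
        # +/- delta away from the last sampled level: no clock, no local DAC.
        events, level = [], signal[0]
        for i, x in enumerate(signal):
            if x >= level + delta:
                events.append((i, +1)); level += delta
            elif x <= level - delta:
                events.append((i, -1)); level -= delta
        return events

    t = np.linspace(0.0, 1.0, 1000)
    events = level_crossing_sample(np.sin(2 * np.pi * 5 * t), delta=0.1)
    print(len(events), "events for 1000 input samples")

Because events are produced only when the signal actually changes, sparse bio-potential activity yields far fewer samples than uniform Nyquist-rate sampling, which is the source of the data-size reduction claimed above.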
The path to the software-defined radio receiver After being the subject of speculation for many years, a software-defined radio receiver concept has emerged that is suitable for mobile handsets. A key step forward is the realization that in mobile handsets, it is enough to receive one channel with any bandwidth, situated in any band. Thus, the front-end can be tuned electronically. Taking a cue from a digital front-end, the receiver's flexible ...
Design Considerations for a Direct RF Sampling Mixer This brief presents a detailed time-domain and frequency-domain analysis of a direct RF sampling mixer. Design considerations such as incomplete charge sharing and large signal nonlinearity are addressed. An accurate frequency-domain transfer function is derived. Estimation of noise figure is given. The analysis applies to the design of sub-sampling mixers that have become important for software-d...
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
A 0.025-mm 2 0.8-V 78.5-dB SNDR VCO-Based Sensor Readout Circuit in a Hybrid PLL- $\Delta\Sigma$ M Structure This article presents a capacitively coupled voltage-controlled oscillator (VCO)-based sensor readout featuring a hybrid phase-locked loop (PLL)- <inline-formula xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink"> <tex-math notation="LaTeX">$\Delta \Sigma $ </tex-math></inline-formula> modulator structure. It leverages phase-locking and phase-frequency detector (PFD) array to concurrently perform quantization and dynamic element matching (DEM), much-reducing hardware/power compared with the existing VCO-based readouts’ counting scheme. A low-cost in-cell data-weighted averaging (DWA) scheme is presented to enable a highly linear tri-level digital-to-analog converter (DAC). Fabricated in 40-nm CMOS, the prototype readout achieves 78-dB SNDR in 10-kHz bandwidth, consuming 4.68 <inline-formula xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink"> <tex-math notation="LaTeX">$\mu \text{W}$ </tex-math></inline-formula> and 0.025-mm <sup xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink">2</sup> active area. With 172-dB Schreier figure of merit, its efficiency advances the state-of-the-art VCO-based readouts by <inline-formula xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink"> <tex-math notation="LaTeX">$50\times $ </tex-math></inline-formula> .
A Bayesian Method for the Induction of Probabilistic Networks from Data This paper presents a Bayesian method for constructing probabilistic networks from databases. In particular, we focus on constructing Bayesian belief networks. Potential applications include computer-assisted hypothesis testing, automated scientific discovery, and automated construction of probabilistic expert systems. We extend the basic method to handle missing data and hidden (latent) variables. We show how to perform probabilistic inference by averaging over the inferences of multiple belief networks. Results are presented of a preliminary evaluation of an algorithm for constructing a belief network from a database of cases. Finally, we relate the methods in this paper to previous work, and we discuss open problems.
Principles of Distributed Systems, 13th International Conference, OPODIS 2009, Nîmes, France, December 15-18, 2009. Proceedings
Variability in TCP round-trip times We measured and analyzed the variability in round trip times (RTTs) within TCP connections using passive measurement techniques. We collected eight hours of bidirectional traces containing over 22 million TCP connections between end-points at a large university campus and almost 1 million remote locations. Of these, we used over 1 million TCP connections that yield 10 or more valid RTT samples, to examine RTT variability within a TCP connection. Our results indicate that contrary to observations in several previous studies, RTT values within a connection vary widely. Our results have implications for designing better simulation models, and understanding how round trip times affect the dynamic behavior and throughput of TCP connections.
A 112 Mb/s Full Duplex Remotely-Powered Impulse-UWB RFID Transceiver for Wireless NV-Memory Applications. A dual band symmetrical UWB-RFID transceiver for high capacity wireless NV-Memory applications is reported. The circuit exhibits a figure of merit of 58 pJ/b and 48 pJ/b in Tx and Rx respectively, with a 112.5 Mb/s data rate capability. It operates in the 7.9 GHz UWB frequency band for full duplex communication and is remotely powered through a UHF CW signal. The circuit has been implemented in a ...
Understanding the regenerative comparator circuit The regenerative comparator circuit which lies at the heart of A/D conversion, slicer circuits, and memory sensing, is unstable, time-varying, nonlinear, and with multiple equilibria. That does not mean, as this paper shows, that it cannot be understood with simple equivalent circuits that reveal its dynamics completely, and enable it to be designed to specifications on static and dynamic offset and noise. The analysis is applied to the StrongArm latch.
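The central dynamic in any such analysis is the exponential regeneration of the latch. In the standard small-signal picture (a generic textbook form, offered for orientation rather than as a quote from the paper), the differential voltage grows as

    \Delta v(t) = \Delta v(0)\, e^{t/\tau}, \qquad \tau = \frac{C}{g_m},

so the time to resolve to a level V_{out} from an initial imbalance \Delta v(0) is t = \tau \ln\big( V_{out} / \Delta v(0) \big): small initial imbalances translate into long, metastability-prone decision times, which is why static and dynamic offset and noise set the design specifications mentioned above.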
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
score_0–score_13: 1.071558, 0.037625, 0.036792, 0.014606, 0.004406, 0.002508, 0.00081, 0.000056, 0, 0, 0, 0, 0, 0
Fast Bulk Bitwise AND and OR in DRAM Bitwise operations are an important component of modern day programming, and are used in a variety of applications such as databases. In this work, we propose a new and simple mechanism to implement bulk bitwise AND and OR operations in DRAM, which is faster and more efficient than existing mechanisms. Our mechanism exploits existing DRAM operation to perform a bitwise AND/OR of two DRAM rows completely within DRAM. The key idea is to simultaneously connect three cells to a bitline before the sense-amplification. By controlling the value of one of the cells, the sense amplifier forces the bitline to the bitwise AND or bitwise OR of the values of the other two cells. Our approach can improve the throughput of bulk bitwise AND/OR operations by 9.7X and reduce their energy consumption by 50.5X. Since our approach exploits existing DRAM operation as much as possible, it requires negligible changes to DRAM logic. We evaluate our approach using a real-world implementation of a bit-vector based index for databases. Our mechanism improves the performance of commonly-used range queries by 30% on average.
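The sense-amplification step reduces to a majority vote over the three simultaneously connected cells; a minimal behavioral model of that idea (an illustrative sketch, not the circuit) is:

    def triple_row_activate(a, b, control):
        # The sense amplifier drives the bitline to the majority of the three
        # connected cells: control=0 yields a AND b, control=1 yields a OR b.
        return int(a + b + control >= 2)

    for a in (0, 1):
        for b in (0, 1):
            assert triple_row_activate(a, b, 0) == (a & b)
            assert triple_row_activate(a, b, 1) == (a | b)

Fixing the control cell to 0 or 1 therefore selects AND or OR, while the two data rows are combined entirely inside the DRAM array, with no data movement over the memory bus.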
Toward standardized near-data processing with unrestricted data placement for GPUs 3D-stacked memory devices with processing logic can help alleviate the memory bandwidth bottleneck in GPUs. However, in order for such Near-Data Processing (NDP) memory stacks to be used for different GPU architectures, it is desirable to standardize the NDP architecture. Our proposal enables this standardization by allowing data to be spread across multiple memory stacks as is the norm in high-performance systems without an MMU on the NDP stack. The keys to this architecture are the ability to move data between memory stacks as required for computation, and a partitioned execution mechanism that offloads memory-intensive application segments onto the NDP stack and decouples address translation from DRAM accesses. By enhancing this system with a smart offload selection mechanism that is cognizant of the compute capability of the NDP and cache locality on the host processor, system performance and energy are improved by up to 66.8% and 37.6%, respectively.
GraphiDe: A Graph Processing Accelerator leveraging In-DRAM-Computing In this paper, we propose GraphiDe, a novel DRAM-based processing-in-memory (PIM) accelerator for graph processing. It transforms current DRAM architecture to massively parallel computational units exploiting the high internal bandwidth of the modern memory chips to accelerate various graph processing applications. GraphiDe can be leveraged to greatly reduce energy consumption and latency dealing with underlying adjacency matrix computations by eliminating unnecessary off-chip accesses. The extensive circuit-architecture simulations over three social network data-sets indicate that GraphiDe achieves on average 3.1x energy-efficiency improvement and 4.2x speed-up over the recent DRAM based PIM platform. It achieves ~59x higher energy-efficiency and 83x speed-up over GPU-based acceleration methods.
Evolution of Memory Architecture Computer memories continue to serve the role that they first served in the electronic discrete variable automatic computer (EDVAC) machine documented by John von Neumann, namely that of supplying instructions and operands for calculations in a timely manner. As technology has made possible significantly larger and faster machines with multiple processors, the relative distance in processor cycles ...
Near memory key/value lookup acceleration. In the "Big Data" era, fast lookup of keys in a key/value store is a ubiquitous operation. We have designed a near memory accelerator combining simple hardware building blocks to accelerate lookup in a hash table based key/value store. We report on the co-design of hardware and software to accomplish fast lookup using open addressing. The accelerator implements a batch get command to look up a set of keys in a single request. Using an FPGA emulator, we evaluate the performance of a query workload under a comprehensive range of conditions such as hash table load factor (fill) and query key repeat distribution (likelihood of a key to reappear in a query workload). We emulate two memory configurations: Hybrid Memory Cube (or High Bandwidth Memory), and Storage Class Memory. Our design shows 12.8X–2.9X speedup compared to conventional CPU lookup depending on workload characteristics.
The Influence of the Sigmoid Function Parameters on the Speed of Backpropagation Learning Sigmoid function is the most commonly known function used in feed forward neural networks because of its nonlinearity and the computational simplicity of its derivative. In this paper we discuss a variant sigmoid function with three parameters that denote the dynamic range, symmetry and slope of the function respectively. We illustrate how these parameters influence the speed of backpropagation learning and introduce a hybrid sigmoidal network with different parameter configuration in different layers. By regulating and modifying the sigmoid function parameter configuration in different layers the error signal problem, oscillation problem and asymmetrical input problem can be reduced. To compare the learning capabilities and the learning rate of the hybrid sigmoidal networks with the conventional networks we have tested the two-spirals benchmark that is known to be a very difficult task for backpropagation and their relatives.
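A three-parameter sigmoid with dynamic-range, symmetry and slope knobs, in the spirit of the variant discussed, can be sketched as follows; the parameter names are ours, chosen to mirror the description:

    import numpy as np

    def sigmoid(x, rng=1.0, sym=0.0, slope=1.0):
        # rng: dynamic range, sym: vertical offset (symmetry), slope: steepness
        return rng / (1.0 + np.exp(-slope * x)) + sym

    def sigmoid_grad(x, rng=1.0, sym=0.0, slope=1.0):
        # Derivative used by backpropagation; a larger slope gives larger
        # gradients near the origin and so changes the learning speed.
        s = 1.0 / (1.0 + np.exp(-slope * x))
        return rng * slope * s * (1.0 - s)

    # rng=2, sym=-1 gives an antisymmetric, tanh-like unit centred at 0.
    print(sigmoid(0.0, rng=2.0, sym=-1.0, slope=1.5))  # -> 0.0

Mixing layers with different (rng, sym, slope) configurations is exactly the kind of hybrid sigmoidal network the paper evaluates on the two-spirals benchmark.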
High-throughput Pairwise Alignment with the Wavefront Algorithm using Processing-in-Memory We show that the wavefront algorithm can achieve higher pairwise read alignment throughput on a UPMEM PIM system than on a server-grade multi-threaded CPU system.
A scalable processing-in-memory accelerator for parallel graph processing The explosion of digital data and the ever-growing need for fast data analysis have made in-memory big-data processing in computer systems increasingly important. In particular, large-scale graph processing is gaining attention due to its broad applicability from social science to machine learning. However, scalable hardware design that can efficiently process large graphs in main memory is still an open problem. Ideally, cost-effective and scalable graph processing systems can be realized by building a system whose performance increases proportionally with the sizes of graphs that can be stored in the system, which is extremely challenging in conventional systems due to severe memory bandwidth limitations. In this work, we argue that the conventional concept of processing-in-memory (PIM) can be a viable solution to achieve such an objective. The key modern enabler for PIM is the recent advancement of the 3D integration technology that facilitates stacking logic and memory dies in a single package, which was not available when the PIM concept was originally examined. In order to take advantage of such a new technology to enable memory-capacity-proportional performance, we design a programmable PIM accelerator for large-scale graph processing called Tesseract. Tesseract is composed of (1) a new hardware architecture that fully utilizes the available memory bandwidth, (2) an efficient method of communication between different memory partitions, and (3) a programming interface that reflects and exploits the unique hardware design. It also includes two hardware prefetchers specialized for memory access patterns of graph processing, which operate based on the hints provided by our programming model. Our comprehensive evaluations using five state-of-the-art graph processing workloads with large real-world graphs show that the proposed architecture improves average system performance by a factor of ten and achieves 87% average energy reduction over conventional systems.
Polymorphic Pipeline Array: A flexible multicore accelerator with virtualized execution for mobile multimedia applications Mobile computing in the form of smart phones, netbooks, and personal digital assistants has become an integral part of our everyday lives. Moving ahead to the next generation of mobile devices, we believe that multimedia will become a more critical and product-differentiating feature. High definition audio and video as well as 3D graphics provide richer interfaces and compelling capabilities. However, these algorithms also bring different computational challenges than wireless signal processing. Multimedia algorithms are more complex featuring more control flow and variable computational requirements where execution time is not dominated by innermost vector loops. Further, data access is more complex where media applications typically operate on multi-dimensional vectors of data rather than single-dimensional vectors with simple strides. Thus, the design of current mobile platforms requires reexamination to account for these new application domains. In this work, we focus on the design of a programmable, low-power accelerator for multimedia algorithms referred to as a polymorphic pipeline array, or PPA. The PPA is designed with flexibility and programmability as first-order requirements to enable the hardware to be dynamically customizable to the application. PPAs exploit pipeline parallelism found in streaming applications to create a coarse-grain hardware pipeline to execute streaming media applications. PPA resources are allocated to each stage depending on its size and ability to exploit fine-grain parallelism. Experimental results show that real-time media applications can take advantage of the static and dynamic configurability for increased power efficiency.
Theory and Practice of Finding Eviction Sets Many micro-architectural attacks rely on the capability of an attacker to efficiently find small eviction sets: groups of virtual addresses that map to the same cache set. This capability has become a decisive primitive for cache side-channel, rowhammer, and speculative execution attacks. Despite their importance, algorithms for finding small eviction sets have not been systematically studied in the literature. In this paper, we perform such a systematic study. We begin by formalizing the problem and analyzing the probability that a set of random virtual addresses is an eviction set. We then present novel algorithms, based on ideas from threshold group testing, that reduce random eviction sets to their minimal core in linear time, improving over the quadratic state-of-the-art. We complement the theoretical analysis of our algorithms with a rigorous empirical evaluation in which we identify and isolate factors that affect their reliability in practice, such as adaptive cache replacement strategies and TLB thrashing. Our results indicate that our algorithms enable finding small eviction sets much faster than before, and under conditions where this was previously deemed impractical.
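The linear-time reduction rests on threshold group testing: split the candidate set into a+1 groups (a being the cache associativity); by pigeonhole, at least one group contains no element of the minimal core and can be discarded. A hedged sketch, where the evicts() oracle and its interface are assumptions for illustration:

    def reduce_eviction_set(candidates, a, evicts):
        # Shrink a working eviction set towards its minimal core of size a.
        # evicts(S) -> True if accessing the addresses in S evicts the victim.
        s = list(candidates)
        while len(s) > a:
            k = len(s) // (a + 1)                  # group size
            groups = [s[i:i + k] for i in range(0, len(s), k)]
            for g in groups:
                rest = [x for x in s if x not in g]
                if evicts(rest):                   # g was redundant: drop it
                    s = rest
                    break
            else:
                break                              # nothing removable; stop
        return s

Each successful iteration removes a constant fraction of the candidates, which is the source of the linear overall running time, in contrast to the quadratic one-address-at-a-time baseline.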
Estimating continuous distributions in Bayesian classifiers When modeling a probability distribution with a Bayesian network, we are faced with the problem of how to handle continuous variables. Most previous work has either solved the problem by discretizing, or assumed that the data are generated by a single Gaussian. In this paper we abandon the normality assumption and instead use statistical methods for nonparametric density estimation. For a naive Bayesian classifier, we present experimental results on a variety of natural and artificial domains, comparing two methods of density estimation: assuming normality and modeling each conditional distribution with a single Gaussian; and using nonparametric kernel density estimation. We observe large reductions in error on several natural and artificial data sets, which suggests that kernel estimation is a useful tool for learning Bayesian models.
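A minimal flexible naive Bayes classifier in this spirit, with one Gaussian kernel density estimate per class and per feature, might look as follows (a sketch on synthetic data, not the paper's code):

    import numpy as np
    from scipy.stats import gaussian_kde

    def fit_kde_nb(X, y):
        # One univariate KDE per (class, feature); priors from class frequencies.
        classes = np.unique(y)
        kdes = {c: [gaussian_kde(X[y == c, j]) for j in range(X.shape[1])]
                for c in classes}
        priors = {c: float(np.mean(y == c)) for c in classes}
        return classes, kdes, priors

    def predict_kde_nb(model, x):
        classes, kdes, priors = model
        # Naive Bayes: argmax_c  log p(c) + sum_j log p(x_j | c)
        scores = [np.log(priors[c]) +
                  sum(np.log(kdes[c][j](x[j])[0]) for j in range(len(x)))
                  for c in classes]
        return classes[int(np.argmax(scores))]

    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
    y = np.array([0] * 50 + [1] * 50)
    print(predict_kde_nb(fit_kde_nb(X, y), np.array([2.8, 3.1])))  # -> 1

Replacing gaussian_kde with a single per-class Gaussian recovers the normality-assuming baseline the paper compares against.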
A Gated FM-UWB System With Data-Driven Front-End Power Control This paper presents a frequency modulated ultra-wideband (FM-UWB) transceiver system with RF submodules gated by a data-driven control signal. With the control signal, intermittent operation of key building blocks such as a VCO in the transmitter and a wideband FM demodulator in the receiver is realized. To enable an effective dynamic power control, the transmitter generates higher subcarrier frequency and modulation index than conventional FM-UWB transmitters by utilizing an 8-modulo fractional-N PLL in which a triangular waveform is generated by a relaxation VCO. In the receiver, an envelope detector monitors the presence of incoming signal and enables the data-edge-triggered power control for a wideband FM demodulator and other blocks. A prototype 3.5-4.1 GHz FM-UWB transceiver for on-chip wireline testing is implemented in 0.18-μm CMOS. Experimental results show that the proposed gated FM-UWB system successfully demodulates the FSK information, achieving nearly 53% power saving with the data-driven power control enabled.
Optimal Hybrid Perimeter and Switching Plans Control for Urban Traffic Networks Since centralized control of urban networks with detailed modeling approaches is computationally complex, developing efficient hierarchical control strategies based on aggregate modeling is of great importance. The dynamics of a heterogeneous large-scale urban network is modeled as R homogeneous regions with the macroscopic fundamental diagrams (MFDs) representation. The MFD provides for homogeneous network regions a unimodal, low-scatter relationship between network vehicle density and network space-mean flow. In this paper, the optimal hybrid control problem for an R-region MFD network is formulated as a mixed-integer nonlinear optimization problem, where two types of controllers are introduced: 1) perimeter controllers and 2) switching signal timing plans controllers. The perimeter controllers are located on the border between the regions, as they manipulate the transfer flows between them, while the switching controllers influence the dynamics of the urban regions, as they define the shape of the MFDs and as a result affect the internal flows within each region. Moreover, to decrease the computational complexity due to the nonlinear and nonconvex nature of the optimization problem, we reformulate the problem as a mixed-integer linear programming (MILP) problem utilizing piecewise affine approximation techniques. Two different approaches for transformation of the original model and building up MILP problems are presented, and the performances of the approximated methods along with the original problem formulation are evaluated and compared for different traffic scenarios of a two-region urban case study.
A 1V 3.5 μW Bio-AFE With Chopper-Capacitor-Chopper Integrator-Based DSL and Low Power GM-C Filter This brief presents a low-noise, low-power bio-signal acquisition analog front-end (Bio-AFE). It mainly includes a capacitively coupled chopper-stabilized instrumentation amplifier (CCIA), a programmable gain amplifier (PGA), a low-pass filter (LPF), and a successive approximation analog to digital converter (SAR ADC). A chopper-capacitor-chopper integrator based DC servo loop (C3IB-DSL...
score_0–score_13: 1.014077, 0.013784, 0.013333, 0.013333, 0.013333, 0.013333, 0.010667, 0.0063, 0.00046, 0.000013, 0, 0, 0, 0
Design Techniques for 48-Gb/s 2.4-pJ/b PAM-4 Baud-Rate CDR With Stochastic Phase Detector This article presents design techniques for a PAM-4 baud-rate digital clock and data recovery (CDR) circuit utilizing a stochastic phase detector (SPD). The proposed baud-rate phase detector (PD) is designed in an inductive and stochastic way, so there is a clear difference from the existing deductive and logical method used in the sign-sign Mueller–Müller PD (SS-MMPD), a representative baud-rate PD. By collecting the histograms of the sequential PAM-4 patterns under EARLY and LATE sampling phases and calculating optimal weights, the SPD exhibits an optimized phase-locking characteristic that maximizes the PAM-4 vertical eye opening (VEO) compared with the conventional logical approaches. In addition, unlike SS-MMPD, which may suffer from a severe multiple-locking problem, the SPD tracks a unique and optimal sampling phase even with an adaptive decision-feedback equalizer (DFE). For verification, a prototype PAM-4 receiver is fabricated in 40-nm CMOS technology and occupies 0.24 mm². Tested with PRBS-7 patterns, it achieves a bit error rate (BER) of less than 10⁻¹¹ and an energy efficiency of 2.4 pJ/b at 48 Gb/s.
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use dominance frontiers, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
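The frontier computation itself can be sketched compactly using the later, widely taught formulation of Cooper, Harvey and Kennedy (an illustrative variant, not this paper's own algorithm): a join node lands in the frontier of every dominator-tree ancestor of each of its predecessors, up to but not including the join node's immediate dominator.

    def dominance_frontiers(preds, idom):
        # preds: dict node -> list of CFG predecessors
        # idom:  dict node -> immediate dominator (idom[entry] == entry)
        df = {v: set() for v in preds}
        for v, ps in preds.items():
            if len(ps) >= 2:                  # only join nodes contribute
                for p in ps:
                    runner = p
                    while runner != idom[v]:
                        df[runner].add(v)     # v is in runner's frontier
                        runner = idom[runner]
        return df

    # Diamond CFG: entry -> a, b; a, b -> join; idom(join) = entry.
    preds = {"entry": [], "a": ["entry"], "b": ["entry"], "join": ["a", "b"]}
    idom = {"entry": "entry", "a": "entry", "b": "entry", "join": "entry"}
    print(dominance_frontiers(preds, idom))   # a and b each get {join}

Dominance frontiers are exactly the places where SSA phi-functions must be inserted, which is why this computation sits at the heart of SSA construction.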
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ⋄W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ⋄W. Thus, ⋄W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
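The single operation Chord exports, mapping a key to the node responsible for it, can be sketched with consistent hashing on a sorted ring; the hash function and identifier-space size here are illustrative assumptions:

    import bisect
    import hashlib

    M = 2 ** 16  # illustrative identifier space; Chord uses 2^m with large m

    def ident(name):
        # Hash names (keys and node addresses alike) onto the ring.
        return int(hashlib.sha1(name.encode()).hexdigest(), 16) % M

    def successor(node_ids, key):
        # The node responsible for a key is the first node clockwise from it.
        ids = sorted(node_ids)
        i = bisect.bisect_left(ids, ident(key))
        return ids[i % len(ids)]               # wrap around the ring

    nodes = [ident("node%d" % i) for i in range(8)]
    print(successor(nodes, "my-data-item"))

This linear view assumes every node knows the whole ring; Chord's contribution is achieving the same mapping with only logarithmic routing state per node, via finger tables, while nodes join and leave.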
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area² product (EDA²P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA²P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
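For reference, the method's iteration for the generic problem minimize f(x) + g(z) subject to Ax + Bz = c, written in the scaled form used throughout the review, is:

    x^{k+1} = \arg\min_x \Big( f(x) + \tfrac{\rho}{2}\, \| A x + B z^k - c + u^k \|_2^2 \Big)
    z^{k+1} = \arg\min_z \Big( g(z) + \tfrac{\rho}{2}\, \| A x^{k+1} + B z - c + u^k \|_2^2 \Big)
    u^{k+1} = u^k + A x^{k+1} + B z^{k+1} - c

Splitting f and g across the x- and z-updates is what lets one subproblem be distributed over data blocks while the other (often a proximal step, e.g., soft-thresholding for the lasso) stays cheap and local.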
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)-size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε²}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D + √n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(log N), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator, while the Type III response is synthesized by adding a high-gain low-frequency path (via error amplifier) to a moderate-gain high-frequency path (via bandpass filter) at the inputs of the PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate the two zeros required in a Type III compensator. A constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process, with the constant Gm/C biasing technique applied to one of the designs. Measurement results show that the converter output settles within 7 μs for a load current step of 500 mA. A peak efficiency of 97% is obtained at 360 mW output power, and a high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of the proposed compensator are reduced by >75% in both designs, compared to an equivalent conventional Type III compensator.
Fundamental analysis of a car to car visible light communication system This paper presents a mathematical model for car-to-car (C2C) visible light communications (VLC) that aims to predict the system performance under different communication geometries. A market-weighted headlamp beam pattern model is employed. We consider both the line-of-sight (LOS) and non-line-of-sight (NLOS) links, and outline the relationship between the communication distance, the system bit error rate (BER) performance and the BER distribution on a vertical plane. Results show that by placing a photodetector (PD) at a height of 0.2-0.4 m above road surface in the car, the communications coverage range can be extended up to 20 m at a data rate of 2Mbps.
A 32-Channel Time-Multiplexed Artifact-Aware Neural Recording System This paper presents a low-power, low-noise microsystem for the recording of neural local field potentials or intracranial electroencephalographic signals. It features 32 time-multiplexed channels at the electrode interface and offers the possibility to spatially delta encode data to take advantage of the large correlation of signals captured from nearby channels. The circuit also implements a mixed-signal voltage-triggered auto-ranging algorithm which allows large interferers to be attenuated in the digital domain while preserving neural information. This effectively increases the system dynamic range and avoids the onset of saturation. A prototype, fabricated in a standard 180 nm CMOS process, has been experimentally verified in vitro with cellular cultures of primary cortical neurons from mice. The system shows an integrated input-referred noise in the 0.5–200 Hz band of 1.4 μVrms for a spot noise of about 85 nV/√Hz. The system draws 1.5 μW per channel from a 1.2 V supply and obtains 71 dB + 26 dB dynamic range when the artifact-aware auto-ranging mechanism is enabled, without penalising other critical specifications such as crosstalk between channels or common-mode and power supply rejection ratios.
score_0–score_13: 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Spiking Neural Network Integrated Circuits: A Review of Trends and Future Directions The rapid growth of deep learning, spurred by its successes in various fields ranging from face recognition [1] to game playing [2], has also triggered a growing interest in the design of specialized hardware accelerators to support these algorithms. This specialized hardware targets one of two categories: either operating in datacenters or on mobile devices at the network edge. While energy efficiency is important in both cases, the need is extremely stringent in the latter class of applications due to limited battery life. Several techniques have been used in the past to improve the energy efficiency of these accelerators [3], including reducing off-chip DRAM access, managing data flow across processing elements, as well as in-memory computing (IMC), which exploits analog processing of data within digital memory arrays [4].
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use dominance frontiers, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
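Because the abstract's central tool is the dominance frontier, a minimal sketch may help. The code below computes dominance frontiers from predecessor lists and immediate dominators using the well-known per-join-point walk (popularised later by Cooper, Harvey, and Kennedy); this is one standard formulation, not necessarily the paper's exact construction, and the diamond-shaped CFG is a hypothetical example.

```python
def dominance_frontiers(preds, idom):
    """preds: node -> list of predecessors; idom: node -> immediate dominator.
    Returns node -> set of nodes in its dominance frontier."""
    df = {n: set() for n in preds}
    for b, ps in preds.items():
        if len(ps) >= 2:                  # only join points contribute
            for p in ps:
                runner = p
                while runner != idom[b]:  # walk up the dominator tree
                    df[runner].add(b)
                    runner = idom[runner]
    return df

# Hypothetical diamond CFG: entry -> a, b; a, b -> merge.
preds = {"entry": [], "a": ["entry"], "b": ["entry"], "merge": ["a", "b"]}
idom = {"entry": "entry", "a": "entry", "b": "entry", "merge": "entry"}
print(dominance_frontiers(preds, idom))  # 'merge' lands in DF(a) and DF(b)
```

In SSA construction, these frontier sets are exactly where phi-functions are placed for variables defined on the branching paths.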
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ⋄W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ⋄W. Thus, ⋄W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
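Chord's single operation is compact enough to sketch. The toy below hashes keys and node names onto an identifier circle and returns the key's successor node; finger tables, node joins, and failure handling are omitted, and the identifier width and node names are arbitrary illustration choices.

```python
import bisect
import hashlib

def chord_id(name, bits=16):
    """Map a string onto the 2**bits identifier circle (truncated SHA-1)."""
    return int(hashlib.sha1(name.encode()).hexdigest(), 16) % (1 << bits)

class ToyChordRing:
    """Key -> node mapping by successor on the ring; no routing or churn."""
    def __init__(self, node_names, bits=16):
        self.bits = bits
        self.ring = sorted((chord_id(n, bits), n) for n in node_names)

    def lookup(self, key):
        kid = chord_id(key, self.bits)
        ids = [i for i, _ in self.ring]
        idx = bisect.bisect_left(ids, kid) % len(self.ring)  # wrap past the top
        return self.ring[idx][1]

ring = ToyChordRing(["n1", "n2", "n3", "n4"])
print(ring.lookup("some-data-item"))
```

A real deployment adds O(log n) finger-table routing so each node keeps only logarithmic state, as the abstract notes.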
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area² product (EDA²P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA²P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
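As a concrete instance of the splitting the review surveys, here is a minimal numpy sketch of ADMM applied to the lasso, one of the listed applications; ρ, λ, the iteration count, and the synthetic data are arbitrary illustration values.

```python
import numpy as np

def lasso_admm(A, b, lam=0.1, rho=1.0, iters=200):
    """ADMM for min 0.5*||Ax - b||^2 + lam*||z||_1 subject to x = z."""
    n = A.shape[1]
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    M = np.linalg.inv(A.T @ A + rho * np.eye(n))  # cached x-update factor
    Atb = A.T @ b
    for _ in range(iters):
        x = M @ (Atb + rho * (z - u))                            # x-update
        v = x + u
        z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)  # soft-threshold
        u = u + x - z                                            # dual update
    return z

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20)
x_true[:3] = (1.0, -2.0, 0.5)
b = A @ x_true + 0.01 * rng.standard_normal(50)
print(np.round(lasso_admm(A, b), 2))  # recovers the 3-sparse signal
```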
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)-size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε²}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D+√n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(log N), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator; while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via error amplifier) with a moderate-gain high-frequency path (via bandpass filter) at the inputs of PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate two zeros required in a Type III compensator. Constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process with constant Gm/C biasing technique being applied to one of the designs. Measurement result shows that converter output is settled within 7 μs for a load current step of 500 mA. Peak efficiency of 97% is obtained at 360 mW output power, and high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of proposed compensator is reduced by > 75 % in both designs, compared to an equivalent conventional Type III compensator.
Fundamental analysis of a car to car visible light communication system This paper presents a mathematical model for car-to-car (C2C) visible light communications (VLC) that aims to predict the system performance under different communication geometries. A market-weighted headlamp beam pattern model is employed. We consider both the line-of-sight (LOS) and non-line-of-sight (NLOS) links, and outline the relationship between the communication distance, the system bit error rate (BER) performance and the BER distribution on a vertical plane. Results show that by placing a photodetector (PD) at a height of 0.2-0.4 m above the road surface in the car, the communications coverage range can be extended up to 20 m at a data rate of 2 Mbps.
A 32-Channel Time-Multiplexed Artifact-Aware Neural Recording System This paper presents a low-power, low-noise microsystem for the recording of neural local field potentials or intracranial electroencephalographic signals. It features 32 time-multiplexed channels at the electrode interface and offers the possibility to spatially delta encode data to take advantage of the large correlation of signals captured from nearby channels. The circuit also implements a mixed-signal voltage-triggered auto-ranging algorithm which attenuates large interferers in the digital domain while preserving neural information. This effectively increases the system dynamic range and avoids the onset of saturation. A prototype, fabricated in a standard 180 nm CMOS process, has been experimentally verified in vitro with cellular cultures of primary cortical neurons from mice. The system shows an integrated input-referred noise in the 0.5–200 Hz band of 1.4 μVrms for a spot noise of about 85 nV/√Hz. The system draws 1.5 μW per channel from a 1.2 V supply and obtains 71 dB + 26 dB dynamic range when the artifact-aware auto-ranging mechanism is enabled, without penalising other critical specifications such as crosstalk between channels or common-mode and power supply rejection ratios.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Integrating multiuser dynamic OFDMA into IEEE 802.11a and prototyping it on a real-time software-defined radio testbed Multiuser dynamic orthogonal frequency division multiple access (OFDMA) can achieve high downlink capacities in future wireless networks by optimizing the subcarrier allocation for each user. When it comes to the integration into current wireless local area network (WLAN) standards, dynamic OFDMA raises several implementation issues which are neglected in theoretical papers. Putting this emerging approach into practice requires treating these issues accordingly and demonstrating the feasibility of the system design. In this paper, we propose a dynamic OFDMA integration for the physical layer of the widespread IEEE 802.11a standard. To test our implementation and demonstrate its practical relevance we use a pragmatic approach: we prototype multiuser dynamic OFDMA on a real-time software-defined radio testbed for WLANs. We discuss details of our implementation and provide measurements showing that it does not introduce significant overhead into the IEEE 802.11a system at high subcarrier allocation quality. We particularly focus on the problems of our integration as well as the concepts and limitations of the used testbed.
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use dominance frontiers, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ⋄W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ⋄W. Thus, ⋄W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area² product (EDA²P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA²P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)-size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε²}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D+√n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(log N), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator; while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via error amplifier) with a moderate-gain high-frequency path (via bandpass filter) at the inputs of PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate two zeros required in a Type III compensator. Constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process with constant Gm/C biasing technique being applied to one of the designs. Measurement result shows that converter output is settled within 7 μs for a load current step of 500 mA. Peak efficiency of 97% is obtained at 360 mW output power, and high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of proposed compensator is reduced by > 75 % in both designs, compared to an equivalent conventional Type III compensator.
Conductance modulation techniques in switched-capacitor DC-DC converter for maximum-efficiency tracking and ripple mitigation in 22nm Tri-gate CMOS Active conduction modulation techniques are demonstrated in a fully integrated multi-ratio switched-capacitor voltage regulator with hysteretic control, implemented in 22nm tri-gate CMOS with high-density MIM capacitor. We present (i) an adaptive switching frequency and switch-size scaling scheme for maximum efficiency tracking across a wide range of voltages and currents, governed by a frequency-based control law that is experimentally validated across multiple dies and temperatures, and (ii) a simple active ripple mitigation technique to modulate the gate drive of select MOSFET switches effectively in all conversion modes. Efficiency boosts of up to 15% are measured under light-load conditions. Load-independent output ripple of <50 mV is achieved, enabling fewer interleaving phases. Testchip implementations and measurements demonstrate ease of integration in SoC designs, power efficiency benefits and EMI/RFI improvements.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Real-Time PID Control Strategy for Maglev Transportation System via Particle Swarm Optimization This paper focuses on the design of a real-time particle-swarm-optimization-based proportional-integral-differential (PSO-PID) control scheme for the levitated balancing and propulsive positioning of a magnetic-levitation (maglev) transportation system. The dynamic model of a maglev transportation system, including levitated electromagnets and a propulsive linear induction motor based on the concepts of mechanical geometry and motion dynamics, is first constructed. The control objective is to design a real-time PID control methodology via PSO gain selections and to directly ensure the stability of the controlled system without the requirement of strict constraints, detailed system information, and auxiliary compensated controllers despite the existence of uncertainties. The effectiveness of the proposed PSO-PID control scheme for the maglev transportation system is verified by numerical simulations and experimental results, and its superiority over PSO-PID schemes in previous literature and conventional sliding-mode (SM) control strategies is indicated. With the proposed PSO-PID control scheme, the controlled maglev transportation system possesses the advantages of favorable control performance without the chattering phenomena of SM control and robustness to uncertainties superior to fixed-gain PSO-PID control.
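A minimal sketch of the PSO-PID idea follows: each particle encodes (Kp, Ki, Kd) gains and is scored by simulating a closed loop. The second-order plant and integral-absolute-error cost below are hypothetical stand-ins for the maglev dynamics and the paper's actual fitness function.

```python
import numpy as np

def step_cost(gains, dt=0.01, steps=500):
    """IAE of a PID loop around a toy plant y'' + 2y' + 5y = u (made up)."""
    kp, ki, kd = gains
    y = yd = integ = prev_e = 0.0
    cost = 0.0
    for _ in range(steps):
        e = 1.0 - y                          # unit setpoint
        integ += e * dt
        u = kp * e + ki * integ + kd * (e - prev_e) / dt
        prev_e = e
        ydd = -2.0 * yd - 5.0 * y + u
        yd += ydd * dt
        y += yd * dt
        cost += abs(e) * dt
    return cost

def pso(n=20, iters=40, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    pos = rng.uniform(0.0, 20.0, (n, 3))     # particles: (Kp, Ki, Kd)
    vel = np.zeros((n, 3))
    pbest, pcost = pos.copy(), np.array([step_cost(p) for p in pos])
    g = pbest[pcost.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((n, 3)), rng.random((n, 3))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (g - pos)
        pos = np.clip(pos + vel, 0.0, 20.0)
        cost = np.array([step_cost(p) for p in pos])
        better = cost < pcost
        pbest[better], pcost[better] = pos[better], cost[better]
        g = pbest[pcost.argmin()].copy()
    return g

print("PSO-tuned (Kp, Ki, Kd):", np.round(pso(), 2))
```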
Design And Experiment Of A Macro-Micro Planar Maglev Positioning System In this paper, a new planar magnetic levitation (maglev) positioning system is proposed, which is capable of executing dual-axis planar motions purely involving magnetic forces. Functionally, such a mechanism behaves like a planar XY table with micrometer precision. Specifically, in this system, a new structure with an adaptive sliding-mode control (ASMC) algorithm is described, which aims to achieve the following three goals: 1) a large moving range (millimeter level); 2) precise positioning (micrometer level); and 3) fast response. The system consists of a moving carrier platform, six permanent magnets (PMs) attached to the carrier, and six electromagnets mounted on a fixed base. After exploring the characteristics of the magnetic forces between PMs and electromagnets, the general 6-DOF dynamic model of this system is derived and analyzed. Then, because of the naturally unstable behavior inherent in maglev systems, the proposed ASMC guarantees satisfactory performance of the maglev system. Experiments have successfully demonstrated the feasibility and effectiveness of the overall system.
Robust Petri Fuzzy-Neural-Network Control For Linear Induction Motor Drive This study focuses on the development of a robust Petri-fuzzy-neural-network (PFNN) control strategy applied to a linear induction motor (LIM) drive for periodic motion. Based on the concept of the nonlinear state feedback theory, a feedback linearization control (FLC) system is first adopted in order to decouple the thrust force and the flux amplitude of the LIM. However, particular system information is required in the FLC system so that the corresponding control performance is influenced seriously by system uncertainties. Hence, to increase the robustness of the LIM drive for high-performance applications, a robust PFNN control system is investigated based on the model-free control design to retain the decoupled control characteristic of the FLC system. The adaptive tuning algorithms for network parameters are derived in the sense of the Lyapunov stability theorem, such that the stability of the control system can be guaranteed under the occurrence of system uncertainties. The effectiveness of the proposed control scheme is verified by both numerical simulations and experimental results, and the salient merits are indicated in comparison with the FLC system.
Transverse-Flux-Type Cylindrical Linear Synchronous Motor Using Generic Armature Cores for Rotary Machinery This paper presents the design and analysis of a transverse-flux-type cylindrical linear synchronous motor using generic armature cores for rotary machinery that can address the problem of complex structures in conventional transverse-flux-type topologies. First, the operational principle and structural advantages of the proposed model are explained. The thrust density and cogging force are investigated during the initial design stage using an application in which large thrust density and low cogging force are required. The proposed model is both theoretically and numerically designed by using a magnetic-circuit method and a 3-D finite-element method, respectively. Finally, the results and efficacy of our structural concept are experimentally validated.
Multiphase Active Way Linear Motor: Proof-of-Concept Prototype In this paper, an active way linear synchronous motor with multiphase independent supply is presented. A proof-of-concept prototype of the proposed linear motor has been built, including power electronics and control. The control is based on a multi digital signal processor (multi DSP) master-slave structure. The controller design and the power and control boards have been described. The analytical model of the motor is presented. The analytical results are compared with a finite-element simulation, showing a good agreement. The analytical model has been also used to implement a dynamic model of the whole system in the Matlab/Simulink/Plecs simulation environment. The proof-of-concept prototype has been fully tested with one and two independent gliders. The obtained experimental results show that the chosen multi DSP master-slave structure allows the control of the proposed linear motor with good performances and reasonable costs.
Linear Motor-Powered Transportation: History, Present Status, and Future Outlook An outline of the different fields of application for linear motors in transportation is given. The different types of linear motors are described and compared. The current status of the different linear motors used in the transportation sector is analyzed. Finally, a look at worldwide activities and future prospects is presented.
Nonconvex Model Predictive Control For Commercial Refrigeration We consider the control of a commercial multi-zone refrigeration system, consisting of several cooling units that share a common compressor, which is used to cool multiple areas or rooms. In each time period we choose the cooling capacity of each unit and a common evaporation temperature. The goal is to minimise the total energy cost, using real-time electricity prices, while obeying temperature constraints on the zones. We propose a variation on model predictive control to achieve this goal. When the right variables are used, the dynamics of the system are linear, and the constraints are convex. The cost function, however, is nonconvex due to the temperature dependence of thermodynamic efficiency. To handle this nonconvexity we propose a sequential convex optimisation method, which typically converges in fewer than 5 iterations. We employ a fast convex quadratic programming solver to carry out the iterations, which is more than fast enough to run in real time. We demonstrate our method on a realistic model, with a full year simulation and 15-minute time periods, using historical electricity prices and weather data, as well as random variations in thermal load. These simulations show substantial cost savings, on the order of 30%, compared to a standard thermostat-based control system. Perhaps more important, we see that the method exhibits sophisticated response to real-time variations in electricity prices. This demand response is critical to help balance real-time uncertainties in generation capacity associated with large penetration of intermittent renewable energy sources in a future smart grid.
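The algorithmic core here is sequential convex optimisation of a nonconvex cost. As a scalar illustration only (not the refrigeration model), the sketch below runs the classic convex-concave procedure on f(x) = x^4 - 3x^2 + x: the concave term -3x^2 is linearised around the current iterate and the convexified subproblem is solved in closed form each round.

```python
import numpy as np

def ccp_minimize(x0=2.0, iters=20):
    """Convex-concave procedure on f(x) = x**4 - 3*x**2 + x."""
    x = x0
    for _ in range(iters):
        # Keep the convex part x**4 + x; linearize -3*x**2 around x_k.
        # Convex subproblem stationarity: 4*x**3 + 1 - 6*x_k = 0.
        x = float(np.cbrt((6.0 * x - 1.0) / 4.0))
    return x

x = ccp_minimize()
print("iterate:", round(x, 4), "f(x):", round(x**4 - 3 * x**2 + x, 4))
```

Each iterate solves a convex surrogate and decreases f, mirroring the few-iteration convergence the abstract reports for the full MPC problem.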
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use dominance frontiers, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
Randomized Smoothing for Stochastic Optimization. We analyze convergence rates of stochastic optimization algorithms for nonsmooth convex optimization problems. By combining randomized smoothing techniques with accelerated gradient methods, we obtain convergence rates of stochastic optimization procedures, both in expectation and with high probability, that have optimal dependence on the variance of the gradient estimates. To the best of our knowledge, these are the first variance-based rates for nonsmooth optimization. We give several applications of our results to statistical estimation problems and provide experimental results that demonstrate the effectiveness of the proposed algorithms. We also describe how a combination of our algorithm with recent work on decentralized optimization yields a distributed stochastic optimization algorithm that is order-optimal.
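A minimal sketch of the estimator behind randomized smoothing: subgradients of a nonsmooth f are averaged at Gaussian-perturbed points, approximating the gradient of the smoothed f_u(x) = E[f(x + uZ)]. The objective, step size, and sample count below are illustrative, and the paper's accelerated-gradient wrapper is not reproduced.

```python
import numpy as np

def smoothed_subgrad(f_subgrad, x, u=0.1, samples=20, rng=None):
    """Monte-Carlo (sub)gradient of the Gaussian-smoothed objective."""
    rng = rng if rng is not None else np.random.default_rng(0)
    g = np.zeros_like(x)
    for _ in range(samples):
        g += f_subgrad(x + u * rng.standard_normal(x.shape))
    return g / samples

# Nonsmooth example: f(x) = ||x||_1 with subgradient sign(x).
x = np.array([1.0, -2.0, 0.5])
for step in range(200):
    g = smoothed_subgrad(np.sign, x, rng=np.random.default_rng(step))
    x -= 0.05 * g
print(np.round(x, 3))  # drifts toward the minimizer at the origin
```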
Error exponents for asymmetric two-user discrete memoryless source-channel coding systems We study the transmission of two discrete memoryless correlated sources, consisting of a common and a private source, over a discrete memoryless multiterminal channel with two transmitters and two receivers. At the transmitter side, the common source is observed by both encoders but the private source can only be accessed by one encoder. At the receiver side, both decoders need to reconstruct the common source, but only one decoder needs to reconstruct the private source. We hence refer to this system by the asymmetric two-user source-channel coding system. We derive a universally achievable lossless joint source-channel coding (JSCC) error exponent pair for the two-user system by using a technique which generalizes Csiszár's type-packing lemma (1980) for the point-to-point (single-user) discrete memoryless source-channel system. We next investigate the largest convergence rate of asymptotic exponential decay of the system (overall) probability of erroneous transmission, i.e., the system JSCC error exponent. We obtain lower and upper bounds for the exponent. As a consequence, we establish a JSCC theorem with single-letter characterization and we show that the separation principle holds for the asymmetric two-user scenario. By introducing common randomization, we also provide a formula for the tandem (separate) source-channel coding error exponent. Numerical examples show that for a large class of systems consisting of two correlated sources and an asymmetric multiple-access channel with additive noise, the JSCC error exponent considerably outperforms the corresponding tandem coding error exponent.
A 5-Gb/s ADC-Based Feed-Forward CDR in 65 nm CMOS This paper presents an ADC-based CDR that blindly samples the received signal at twice the data rate and uses these samples to directly estimate the locations of zero crossings for the purpose of clock and data recovery. We successfully confirmed the operation of the proposed CDR architecture at 5 Gb/s. The receiver is implemented in 65 nm CMOS, occupies 0.51 mm² and consumes 178.4 mW at 5 Gb/s.
Modeling of software radio aspects by mapping of SDL and CORBA With the evolution of 3rd generation mobile communications standardization, the software radio concept has the potential to offer a pragmatic solution - a software implementation that allows the mobile terminal to adapt dynamically to its radio environment. The mapping of SDL and CORBA mechanisms is introduced, in order to provide a generic platform for the implementation of future mobile services, supporting standardized interfaces and manufacturer platform independent object and service functionality description. For the functional entity diagram model, it is proposed that the functional entities be designed as objects, the functional entities group as 'open' object oriented SDL platforms, and the interfaces between them as CORBA IDLs, communicating via the ORB in a generic implementation and location independent way. The functional entity groups are proposed to be modeled as SDL block types, while the functional entities and sub-entities as SDL process and service types. The objects interact with each other like client or server objects requesting or receiving services from other objects. Every object has a CORBA IDL interface, which allows every component to be distributed in an optimum way by providing a standardized infrastructure, ensuring interoperability, flexibility, reusability, transparency and management capabilities.
A Highly Adaptive Leader Election Algorithm for Mobile Ad Hoc Networks.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
1.054942
0.060667
0.053
0.053
0.036367
0.020067
0.005333
0
0
0
0
0
0
0
Minitaur, an Event-Driven FPGA-Based Spiking Network Accelerator Current neural networks are accumulating accolades for their performance on a variety of real-world computational tasks including recognition, classification, regression, and prediction, yet there are few scalable architectures that have emerged to address the challenges posed by their computation. This paper introduces Minitaur, an event-driven neural network accelerator, which is designed for low power and high performance. As a field-programmable gate array-based system, it can be integrated into existing robotics or it can offload computationally expensive neural network tasks from the CPU. The version presented here implements a spiking deep network which achieves 19 million postsynaptic currents per second on 1.5 W of power and supports up to 65 K neurons per board. The system records 92% accuracy on the MNIST handwritten digit classification and 71% accuracy on the 20 newsgroups classification data set. Due to its event-driven nature, it allows for trading off between accuracy and latency.
Spiking Neural Networks Hardware Implementations and Challenges: A Survey Neuromorphic computing is henceforth a major research field for both academic and industrial actors. As opposed to Von Neumann machines, brain-inspired processors aim at bringing closer the memory and the computational elements to efficiently evaluate machine learning algorithms. Recently, spiking neural networks, a generation of cognitive algorithms employing computational primitives mimicking neuron and synapse operational principles, have become an important part of deep learning. They are expected to improve the computational performance and efficiency of neural networks, but they are best suited for hardware able to support their temporal dynamics. In this survey, we present the state of the art of hardware implementations of spiking neural networks and the current trends in algorithm elaboration from model selection to training mechanisms. The scope of existing solutions is extensive; we thus present the general framework and study on a case-by-case basis the relevant particularities. We describe the strategies employed to leverage the characteristics of these event-driven algorithms at the hardware level and discuss their related advantages and challenges.
Tianjic: A Unified and Scalable Chip Bridging Spike-Based and Continuous Neural Computation Toward the long-standing dream of artificial intelligence, two successful solution paths have been paved: 1) neuromorphic computing and 2) deep learning. Recently, they tend to interact for simultaneously achieving biological plausibility and powerful accuracy. However, models from these two domains have to run on distinct substrates, i.e., neuromorphic platforms and deep learning accelerators, respectively. This architectural incompatibility greatly compromises the modeling flexibility and hinders promising interdisciplinary research. To address this issue, we build a unified model description framework and a unified processing architecture (Tianjic), which covers the full stack from software to hardware. By implementing a set of integration and transformation operations, Tianjic is able to support spiking neural networks, biological dynamic neural networks, multilayered perceptron, convolutional neural networks, recurrent neural networks, and so on. A compatible routing infrastructure enables homogeneous and heterogeneous scalability on a decentralized many-core network. Several optimization methods are incorporated, such as resource and data sharing, near-memory processing, compute/access skipping, and intra-/inter-core pipeline, to improve performance and efficiency. We further design streaming mapping schemes for efficient network deployment with a flexible tradeoff between execution throughput and resource overhead. A 28-nm prototype chip is fabricated with >610-GB/s internal memory bandwidth. A variety of benchmarks are evaluated and compared with GPUs and several existing specialized platforms. In summary, the fully unfolded mapping can achieve significantly higher throughput and power efficiency; the semi-folded mapping can save 30x resources while still presenting comparable performance on average. Finally, two hybrid-paradigm examples, a multimodal unmanned bicycle and a hybrid neural network, are demonstrated to show the potential of our unified architecture. This article paves a new way to explore neural computing.
Application of Deep Compression Technique in Spiking Neural Network Chip. In this paper, a reconfigurable and scalable spiking neural network processor, containing 192 neurons and 6144 synapses, is developed. By using a deep compression technique in the spiking neural network chip, the amount of physical synapses can be reduced to 1/16 of that needed in the original network, while the accuracy is maintained. This compression technique can greatly reduce the number of SRAMs inside the chip as well as the power consumption of the chip. This design achieves throughput per unit area of 1.1 GSOP/(s·mm²) at 1.2 V, and energy consumed per SOP of 35 pJ. A 2-layer fully-connected spiking neural network is mapped to the chip, and thus the chip is able to realize handwritten digit recognition on MNIST with an accuracy of 91.2%.
How the Brain Formulates Memory: A Spatio-Temporal Model Research Frontier. Memory is a complex process across different brain regions and a fundamental function for many cognitive behaviors. Emerging experimental results suggest that memories are represented by populations of neurons and organized in a categorical and hierarchical manner. However, it is still not clear how the neural mechanisms are emulated in computational models. In this paper, we present a spatio-temp...
Spike Counts based Low Complexity SNN Architecture with Binary Synapse. In this paper, we present an energy and area efficient spike neural network (SNN) processor based on novel spike counts based methods. For the low cost SNN design, we propose hardware-friendly complexity reduction techniques for both of learning and inferencing modes of operations. First, for the unsupervised learning process, we propose a spike counts based learning method. The novel learning app...
STDP-Based Pruning of Connections and Weight Quantization in Spiking Neural Networks for Energy-Efficient Recognition Spiking neural networks (SNNs) with a large number of weights and varied weight distribution can be difficult to implement in emerging in-memory computing hardware due to the limitations on crossbar size (implementing dot product), the constrained number of conductance states in non-CMOS devices and the power budget. We present a sparse SNN topology where noncritical connections are pruned to reduce the network size, and the remaining critical synapses are weight quantized to accommodate for limited conductance states. Pruning is based on the power law weight-dependent spike timing dependent plasticity model; synapses between pre- and post-neuron with high spike correlation are retained, whereas synapses with low correlation or uncorrelated spiking activity are pruned. The weights of the retained connections are quantized to the available number of conductance states. The process of pruning noncritical connections and quantizing the weights of critical synapses is performed at regular intervals during training. We evaluated our sparse and quantized network on MNIST dataset and on a subset of images from Caltech-101 dataset. The compressed topology achieved a classification accuracy of 90.1% (91.6%) on the MNIST (Caltech-101) dataset with 3.1X (2.2X) and 4X (2.6X) improvement in energy and area, respectively. The compressed topology is energy and area efficient while maintaining the same classification accuracy of a 2-layer fully connected SNN topology.
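A minimal numpy sketch of the two compression steps may make the pipeline concrete. The paper prunes by STDP spike-correlation; the magnitude threshold below is a hypothetical proxy for that criterion, followed by uniform quantization of surviving weights to a fixed number of conductance states.

```python
import numpy as np

def prune_and_quantize(w, keep_frac=0.25, levels=8):
    """Keep the largest-magnitude keep_frac of weights, then quantize the
    survivors uniformly to `levels` values (stand-in for conductance states)."""
    flat = np.sort(np.abs(w).ravel())
    thresh = flat[int((1.0 - keep_frac) * flat.size)]
    mask = np.abs(w) >= thresh
    pruned = w * mask
    lo, hi = pruned.min(), pruned.max()
    step = (hi - lo) / (levels - 1)
    quant = np.round((pruned - lo) / step) * step + lo
    return quant * mask          # pruned synapses stay exactly zero

rng = np.random.default_rng(1)
w = rng.standard_normal((4, 4))  # toy weight matrix
print(np.round(prune_and_quantize(w), 3))
```

In the paper this compression interleaves with training at regular intervals rather than being applied once after the fact.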
The PARSEC benchmark suite: characterization and architectural implications This paper presents and characterizes the Princeton Application Repository for Shared-Memory Computers (PARSEC), a benchmark suite for studies of Chip-Multiprocessors (CMPs). Previous available benchmarks for multiprocessors have focused on high-performance computing applications and used a limited number of synchronization methods. PARSEC includes emerging applications in recognition, mining and synthesis (RMS) as well as systems applications which mimic large-scale multithreaded commercial programs. Our characterization shows that the benchmark suite covers a wide spectrum of working sets, locality, data sharing, synchronization and off-chip traffic. The benchmark suite has been made available to the public.
A fast and elitist multiobjective genetic algorithm: NSGA-II Multi-objective evolutionary algorithms (MOEAs) that use non-dominated sorting and sharing have been criticized mainly for: (1) their O(MN³) computational complexity (where M is the number of objectives and N is the population size); (2) their non-elitism approach; and (3) the need to specify a sharing parameter. In this paper, we suggest a non-dominated sorting-based MOEA, called NSGA-II (Non-dominated Sorting Genetic Algorithm II), which alleviates all of the above three difficulties. Specifically, a fast non-dominated sorting approach with O(MN²) computational complexity is presented. Also, a selection operator is presented that creates a mating pool by combining the parent and offspring populations and selecting the best N solutions (with respect to fitness and spread). Simulation results on difficult test problems show that NSGA-II is able, for most problems, to find a much better spread of solutions and better convergence near the true Pareto-optimal front compared to the Pareto-archived evolution strategy and the strength-Pareto evolutionary algorithm - two other elitist MOEAs that pay special attention to creating a diverse Pareto-optimal front. Moreover, we modify the definition of dominance in order to solve constrained multi-objective problems efficiently. Simulation results of the constrained NSGA-II on a number of test problems, including a five-objective, seven-constraint nonlinear problem, are compared with another constrained multi-objective optimizer, and the much better performance of NSGA-II is observed.
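The O(MN²) fast non-dominated sorting step is compact enough to sketch directly; the two-objective sample points are arbitrary and both objectives are minimised.

```python
def fast_non_dominated_sort(points):
    """NSGA-II style non-dominated sorting; returns fronts as index lists."""
    n = len(points)
    def dominates(a, b):
        return all(x <= y for x, y in zip(a, b)) and \
               any(x < y for x, y in zip(a, b))
    dominated_by = [[] for _ in range(n)]  # solutions that i dominates
    dom_count = [0] * n                    # how many solutions dominate i
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            if dominates(points[i], points[j]):
                dominated_by[i].append(j)
            elif dominates(points[j], points[i]):
                dom_count[i] += 1
    fronts = [[i for i in range(n) if dom_count[i] == 0]]
    while fronts[-1]:
        nxt = []
        for i in fronts[-1]:
            for j in dominated_by[i]:
                dom_count[j] -= 1
                if dom_count[j] == 0:
                    nxt.append(j)
        fronts.append(nxt)
    return fronts[:-1]

pts = [(1, 5), (2, 3), (4, 1), (3, 4), (5, 5)]
print(fast_non_dominated_sort(pts))  # [[0, 1, 2], [3], [4]]
```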
An Experimental Framework For The Evaluation Of Cooperative Diversity Cooperative diversity is the result of relaying among nodes to achieve space diversity in multipath environments that offer limited time and frequency diversity. Although there is now substantial literature covering specification and analysis of cooperative communication strategies based upon models of wireless environments, there is much less work addressing experiments with real-world radio hardware and propagation channels. This work describes the construction of a three-node experimental testbed based upon a network of software-defined radios for development and verification of cooperative protocols. Several decode-and-forward relay protocols have been implemented and evaluated in terms of their diversity gains as measured from experimental curves of bit-error rate versus average signal-to-noise ratio. In contrast to the few other implementation efforts reported, the experimental setup maintains the relative node geometry while moving the network to induce fading, and the experimental results exhibit diversity benefits.
Controlling the cost of reliability in peer-to-peer overlays Structured peer-to-peer overlay networks provide a useful substrate for building distributed applications but there are general concerns over the cost of maintaining these overlays. The current approach is to configure the overlays statically and conservatively to achieve the desired reliability even under uncommon adverse conditions. This results in high cost in the common case, or poor reliability in worse than expected conditions. We analyze the cost of overlay maintenance in realistic dynamic environments and design novel techniques to reduce this cost by adapting to the operating conditions. With our techniques, the concerns over the overlay maintenance cost are no longer warranted. Simulations using real traces show that they enable high reliability and performance even in very adverse conditions with low maintenance cost.
Accelerating microprocessor silicon validation by exposing ISA diversity Microprocessor design validation is a time consuming and costly task that tends to be a bottleneck in the release of new architectures. The validation step that detects the vast majority of design bugs is the one that stresses the silicon prototypes by applying huge numbers of random tests. Despite its bug detection capability, this step is constrained by extreme computing needs for random tests simulation to extract the bug-free memory image for comparison with the actual silicon image. We propose a self-checking method that accelerates silicon validation and significantly increases the number of applied random tests to improve bug detection efficiency and reduce time-to-market. Analysis of four major ISAs (ARM, MIPS, PowerPC, and x86) reveals their inherent diversity: more than three quarters of the instructions can be replaced with equivalent instructions. We exploit this property in post-silicon validation and propose a methodology for the generation of random tests that detect bugs by comparing results of equivalent instructions. We support our bug detection method in hardware with a light-weight mechanism which, in case of a mismatch, replays the random test replacing the offending instruction with its equivalent. Our bug detection method and corresponding hardware significantly accelerate the post-silicon validation process. Evaluation of the method on an x86 microprocessor model demonstrates its efficiency over simulation-based and self-checking alternatives, in terms of bug detection capabilities and validation time speedup.
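A toy illustration of the self-checking idea: execute an operation and an equivalent reformulation, and flag any disagreement as a suspected bug. On real silicon the pairs are ISA instructions (e.g., a subtraction versus an addition of the negated operand); here they are modeled as Python lambdas, so everything below is illustrative rather than the paper's test generator.

```python
import random

# Each operation is paired with an equivalent reformulation; on silicon
# these would be machine instructions (e.g., SUB r,a,b vs. ADD r,a,-b).
EQUIVALENT_PAIRS = [
    (lambda a, b: a - b, lambda a, b: a + (-b)),
    (lambda a, b: a * 2, lambda a, b: a << 1),
    (lambda a, b: a ^ b, lambda a, b: (a | b) & ~(a & b)),
]

def run_self_checking_tests(n_tests=10_000, seed=0):
    """Run random operands through both forms; a mismatch marks a
    suspected bug and would trigger the replay mechanism on hardware."""
    rng = random.Random(seed)
    mismatches = []
    for trial in range(n_tests):
        op, equivalent = rng.choice(EQUIVALENT_PAIRS)
        a = rng.randrange(-2**31, 2**31)
        b = rng.randrange(-2**31, 2**31)
        if op(a, b) != equivalent(a, b):
            mismatches.append((trial, a, b))
    return mismatches

print(len(run_self_checking_tests()))  # 0 on a bug-free "machine"
```

The point of the construction is that no golden-model simulation is needed: the equivalent instruction pair checks itself.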
Scheduling Analysis of TDMA-Constrained Tasks: Illustration with Software Radio Protocols In this paper, a new task model is proposed for scheduling analysis of dependent tasks in radio stations that embed a TDMA communication protocol. TDMA is a channel access protocol that allows several stations to communicate in the same network by dividing time into several time slots. Tasks handling the TDMA radio protocol are scheduled to be compliant with the TDMA configuration: task parameters such as execution times, deadlines and release times are constrained by TDMA slots. The periodic task model, commonly used in scheduling analysis, is inefficient for the accurate specification of such systems, resulting in pessimistic scheduling analysis results. To address this issue, this paper proposes a new task model called Dependent General Multiframe (DGMF). This model extends the existing GMF model with precedence dependency and shared resource synchronization. We show how to perform scheduling analysis with DGMF by transforming it into a transaction model and using a schedulability test we proposed. In this paper we experiment on "software radio protocols" from Thales Communications & Security, which are representative of the system we want to analyze. Experimental results show an improvement of system schedulability using the proposed analysis technique, compared to existing ones (GMF and periodic tasks). The new task model thus provides a technique to model and analyze TDMA systems with less pessimistic results.
A Sub-μW Reconfigurable Front-End for Invasive Neural Recording That Exploits the Spectral Characteristics of the Wideband Neural Signal This paper presents a sub-μW ac-coupled reconfigurable front-end for invasive wideband neural signal recording. The proposed topology embeds filtering capabilities enabling the selection of different frequency bands inside the neural signal spectrum. Power consumption is optimized by defining specific noise targets for each sub-band. These targets take into account the spectral characteristics of wideband neural signals: local field potentials (LFP) exhibit 1/f^x magnitude scaling while action potentials (AP) show uniform magnitude across frequency. Additionally, noise targets also consider electrode noise and the spectral distribution of noise sources in the circuit. An experimentally verified prototype designed in a standard 180 nm CMOS process draws 815 nW from a 1 V supply. The front-end is able to select among four different frequency bands (modes) up to 5 kHz. The measured input-referred spot-noise at 500 Hz in the LFP mode (1 Hz - 700 Hz) is 55 nV/√Hz while the integrated noise in the AP mode (200 Hz - 5 kHz) is 4.1 μVrms. The proposed front-end achieves sub-μW operation without penalizing other specifications such as input swing, common-mode or power-supply rejection ratios. It reduces the power consumption of neural front-ends with spectral selectivity by 6.1× and, compared with conventional wideband front-ends, it obtains a reduction of 2.5×.
1.041979
0.04
0.04
0.04
0.04
0.04
0.0204
0
0
0
0
0
0
0
CMOS Doherty Amplifier With Variable Balun Transformer and Adaptive Bias Control for Wireless LAN Application This paper presents a novel CMOS Doherty power amplifier (PA) with an impedance inverter using a variable balun transformer (VBT) and adaptive bias control of an auxiliary amplifier. Unlike a conventional quarter-wavelength (λ/4) transmission line impedance inverter of a Doherty PA, the proposed VBT impedance inverter can achieve load modulation without any phase delay circuit. As a result, a λ/4 phase compensation circuit at the input path of the auxiliary amplifier can be removed, and the total size of the Doherty PA can be reduced. Additionally, an enhancement of the power efficiency at backed-off power levels can successfully be achieved with an adaptive gate bias in a common gate stage of the auxiliary amplifier. The PA, fabricated with 0.13-μm CMOS technology, achieved a 1-dB compression point (P1 dB) of 31.9 dBm and a power-added efficiency (PAE) at P1 dB of 51%. When the PA is tested with an 802.11g WLAN orthogonal frequency division multiplexing (OFDM) signal of 54 Mb/s, a 25-dB error vector magnitude (EVM) compliant output power of 22.8 dBm and a PAE of 30.1% are obtained.
Quantization Noise Suppression in Digitally Segmented Amplifiers In this paper, we consider the problem of out-of-band quantization noise suppression in the general family of direct digital-to-RF (DDRF) conversion circuits, where the RF carrier is amplitude modulated by a quantized representation of the baseband signal. Hence, it is desired to minimize the out-of-band quantization noise in order to meet stringent requirements such as receive-band noise levels in frequency-division duplex transceivers. In this paper, we address the problem of out-of-band quantization noise by introducing a novel signal-processing solution, which we refer to as "segmented filtering" (SF). We assess the capability of the proposed SF solution by means of performance analysis and results that have been obtained via circuit-level computer simulations as well as laboratory measurements. Our proposed approach has demonstrated the ability to preserve the required signal quality and power amplifier (PA) efficiency while providing more than 35-dB attenuation of the quantization noise, thus eliminating the need for substantial post-PA passband RF filtering.
A 1.9 GHz CMOS Power Amplifier With Embedded Linearizer to Compensate AM-PM Distortion. A series combining transformer(SCT)-based, watt-level 1.9 GHz linear CMOS power amplifier with an on-chip linearizer is demonstrated. Proposed compact, predistortion-based linearizer is embedded in the two-stage PA to compensate AM-PM distortion of the cascode power stages, and improve ACLR of 3GPP WCDMA uplink signal by 2.6 dB at 28.0 dBm output power. The designed interstage power distributor wi...
A +30.5 dBm CMOS Doherty power amplifier with reliability enhancement technique
Highly Efficient RF Transmitter Over Broad Average Power Range Using Multilevel Envelope-Tracking Power Amplifier We present a highly efficient RF transmitter over broad average power range using a multilevel envelope-tracking power amplifier (ML-ET PA). The ML-ET PA delivers enhanced efficiency at a back-off power region for handset applications. The supply modulator consists of a linear regulator and a switching converter. The DC supply of the linear regulator is adjusted according to the average power of the envelope signal, and the power-supply-independent class-AB output stage is employed to avoid the crossover distortion generated by the different DC supply voltages. The switch current level is not optimally adjusted by itself following the power back-off level, because the DC supply voltages of the linear regulator and switching converter are different. For the optimum operation over the entire power region, the switch current level is adjusted by detecting the input envelope voltage level. For a 20-MHz long term evolution signal with a 7.5 dB peak-to-average power ratio, the proposed supply modulator delivers a peak voltage of 4.5 V to a 6.5 Ω load with a measured efficiency of 75.9%. The proposed ET PA delivers a power-added efficiency (PAE) of 40%, gain of 28.8 dB, evolved universal terrestrial radio access adjacent channel leakage ratio of 35.3 dBc, and error vector magnitude of 3.23% at an average output power of 27 dBm and an operating frequency of 1.71-GHz. At a 10 dB back-off point, the PAE is improved from 14.5% to 18.7% compared to the conventional ET PA.
An octave-range watt-level fully integrated CMOS switching power mixer array for linearization and back-off efficiency improvement
Design Considerations for a Direct Digitally Modulated WLAN Transmitter With Integrated Phase Path and Dynamic Impedance Modulation. A 65-nm digitally modulated polar TX for WLAN 802.11g is fully integrated along with baseband digital filtering. The TX employs dynamic impedance modulation to improve efficiency at back-off powers. High-bandwidth phase modulation is achieved efficiently with an open-loop architecture. Operating from 1.2-V/1-V supplies, the TX delivers 16.8 dBm average power at -28-dB EVM with 24.5% drain efficien...
A filtering technique to lower LC oscillator phase noise Based on a physical understanding of phase-noise mechanisms, a passive LC filter is found to lower the phase-noise factor in a differential oscillator to its fundamental minimum. Three fully integrated LC voltage-controlled oscillators (VCOs) serve as a proof of concept. Two 1.1-GHz VCOs achieve -153 dBc/Hz at 3 MHz offset, biased at 3.7 mA from 2.5 V. A 2.1-GHz VCO achieves -148 dBc/Hz at 15 MHz offset, taking 4 mA from a 2.7-V supply. All oscillators use fully integrated resonators, and the first two exceed discrete transistor modules in figure of merit. Practical aspects and repercussions of the technique are discussed
Measurement issues in galvanic intrabody communication: influence of experimental setup Significance: The need for increasingly energy-efficient and miniaturized bio-devices for ubiquitous health monitoring has paved the way for considerable advances in the investigation of techniques such as intrabody communication (IBC), which uses human tissues as a transmission medium. However, IBC still poses technical challenges regarding the measurement of the actual gain through the human body. The heterogeneity of experimental setups and conditions used together with the inherent uncertainty caused by the human body make the measurement process even more difficult. Goal: The objective of this work, focused on galvanic coupling IBC, is to study the influence of different measurement equipment and conditions on the IBC channel. Methods: Different experimental setups have been proposed in order to analyze key issues such as grounding, load resistance, type of measurement device and effect of cables. In order to avoid the uncertainty caused by the human body, an IBC electric circuit phantom mimicking both human bioimpedance and gain has been designed. Given the low-frequency operation of galvanic coupling, a frequency range between 10 kHz and 1 MHz has been selected. Results: The correspondence between simulated and experimental results obtained with the electric phantom has allowed us to discriminate the effects caused by the measurement equipment. Conclusion: This study has helped us obtain useful considerations about optimal setups for galvanic-type IBC as well as to identify some of the main causes of discrepancy in the literature.
Next-generation wireless communications concepts and technologies Next-generation wireless (NextG) involves the concept that the next generation of wireless communications will be a major move toward ubiquitous wireless communications systems and seamless high-quality wireless services. This article presents the concepts and technologies involved, including possible innovations in architectures, spectrum allocation, and utilization, in radio communications, networks, and services and applications. These include dynamic and adaptive systems and technologies that provide a new paradigm for spectrum assignment and management, smart resource management, dynamic and fast adaptive multilayer approaches, smart radio, and adaptive networking. Technologies involving adaptive and highly efficient modulation, coding, multiple access, media access, network organization, and networking that can provide ultraconnectivity at high data rates with effective QoS for NextG are also described.
Nonlinear adaptive control of active suspensions In this paper, a previously developed nonlinear "sliding" control law is applied to an electro-hydraulic suspension system. The controller relies on an accurate model of the suspension system. To reduce the error in the model, a standard parameter adaptation scheme, based on Lyapunov analysis, is introduced. A modified adaptation scheme, which enables the identification of parameters whose values change with regions of the state space, is then presented. These parameters are not restricted to being slowly time-varying as in the standard adaptation scheme; however, they are restricted to being constant or slowly time varying within regions of the state space. The adaptation algorithms are coupled with the control algorithm and the resulting system performance is analyzed experimentally. The performance is determined by the ability of the actuator output to track a specified force. The performance of the active system, with and without the adaptation, is analyzed. Simulation and experimental results show that the active system is better than a passive system in terms of improving the ride quality of the vehicle. Furthermore, both of the adaptive schemes improve performance, with the modified scheme giving the greater improvement in performance.
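A minimal sketch of Lyapunov-based parameter adaptation of the kind described above, on a toy first-order plant rather than the paper's electro-hydraulic suspension; the plant, gains, and reference signal are all made-up illustrative choices.

```python
# Toy Lyapunov-based adaptive control: plant x' = a*x + u with unknown a,
# reference model xm' = -xm + r, control u = -(a_hat + 1)*x + r, and
# adaptation law a_hat' = gamma * x * e with tracking error e = x - xm.
# (With V = e^2/2 + (a_hat - a)^2 / (2*gamma), V' = -e^2 <= 0.)
a_true, gamma, dt = 2.0, 5.0, 1e-3
x, xm, a_hat = 0.0, 0.0, 0.0
for k in range(20_000):
    r = 1.0 if (k * dt) % 4 < 2 else -1.0   # square-wave reference (excitation)
    e = x - xm
    u = -(a_hat + 1.0) * x + r              # certainty-equivalence control
    x += dt * (a_true * x + u)              # unknown plant (Euler step)
    xm += dt * (-xm + r)                    # reference model
    a_hat += dt * (gamma * x * e)           # Lyapunov adaptation law
print(f"estimated a = {a_hat:.3f} (true {a_true}), final |e| = {abs(e):.4f}")
```

As in the abstract, the estimated parameter is only guaranteed to converge under sufficiently rich excitation, which is why the reference here is a square wave rather than a constant.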
Design and Analysis of a Class-D Stage With Harmonic Suppression. This paper presents the design and analysis of a low-power Class-D stage in 90 nm CMOS featuring a harmonic suppression technique, which cancels the 3rd harmonic by shaping the output voltage waveform. Only digital circuits are used and the short-circuit current present in Class-D inverter-based output stages is eliminated, relaxing the buffer requirements. Using buffers with reduced drive strengt...
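The 3rd-harmonic cancellation by waveform shaping can be sanity-checked numerically: summing a square wave with a copy delayed by one sixth of a period shifts its 3rd harmonic by 180°, so it cancels while the fundamental survives. This generic NumPy demo illustrates the principle only; it is not the paper's circuit.

```python
import numpy as np

fs = n = 200_000          # 1 s of samples -> 1 Hz FFT resolution
f0 = 1_000                # fundamental frequency in Hz
t = np.arange(n) / fs
square = lambda phase: np.sign(np.sin(2 * np.pi * f0 * t + phase))

plain = square(0.0)
# Add a copy delayed by T/6 (phase -60 deg): its 3rd harmonic is shifted
# by 3 * 60 = 180 deg and cancels, producing a 3-level waveform.
shaped = 0.5 * (square(0.0) + square(-np.pi / 3))

def harmonic_level(x, k):
    return np.abs(np.fft.rfft(x))[k * f0] / len(x)  # bin k*f0 = k-th harmonic

for k in (1, 3, 5):
    print(f"H{k}: plain={harmonic_level(plain, k):.4f} "
          f"shaped={harmonic_level(shaped, k):.4f}")  # H3 of shaped ~ 0
```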
Optimum insertion/deletion point selection for fractional sample rate conversion In this paper, an optimum insertion/deletion point selection algorithm for fractional sample rate conversion (SRC) is proposed. The direct insertion/deletion technique achieves low complexity and low power consumption as compared to the other fractional SRC methods. Using a multiple set insertion/deletion technique is efficient for reduction of distortion caused by the insertion/deletion step. When the conversion factor is (N ± α)/N, the number of possible patterns of insertion/deletion points and the number of combinations for multiple set inserters/deleters grow as α increases. The proposed algorithm minimizes the distortion due to SRC by selecting the patterns and the combinations.
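A minimal sketch of the underlying direct insertion/deletion operation, with the insertion/deletion points simply given (sample repetition is assumed for insertion); the paper's contribution, choosing those points optimally, is not reproduced here.

```python
import numpy as np

def direct_insert_delete(x, points, insert=True):
    """Naive fractional sample-rate conversion on one block of N samples.

    x      : 1-D block of N input samples
    points : indices at which samples are inserted (repeated) or deleted;
             the paper's algorithm selects these to minimize distortion,
             here they are just given.
    """
    if insert:  # N -> N + len(points): repeat the chosen samples
        return np.insert(x, points, x[points])
    return np.delete(x, points)  # N -> N - len(points)

block = np.sin(2 * np.pi * 0.01 * np.arange(100))
up = direct_insert_delete(block, [25, 75], insert=True)     # factor (N+2)/N
down = direct_insert_delete(block, [25, 75], insert=False)  # factor (N-2)/N
```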
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54 ×(CPU), 1.56 ×(GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02 ×(memristor-based PIM) and 10.48 ×(CMOS-based PIM), on average.
1.071332
0.072408
0.072408
0.070842
0.036204
0.023688
0.007601
0.000045
0
0
0
0
0
0
Post-silicon validation opportunities, challenges and recent advances Post-silicon validation is used to detect and fix bugs in integrated circuits and systems after manufacture. Due to sheer design complexity, it is nearly impossible to detect and fix all bugs before manufacture. Post-silicon validation is a major challenge for future systems. Today, it is largely viewed as an art with very few systematic solutions. As a result, post-silicon validation is an emerging research topic with several exciting opportunities for major innovations in electronic design automation. In this paper, we provide an overview of the post-silicon validation problem and how it differs from traditional pre-silicon verification and manufacturing testing. We also discuss major post-silicon validation challenges and recent advances.
Reversi: Post-silicon validation system for modern microprocessors Verification remains an integral and crucial phase of today's microprocessor design and manufacturing process. Unfortunately, with soaring design complexities and decreasing time-to-market windows, today's verification approaches are incapable of fully validating a microprocessor before its release to the public. Increasingly, post-silicon validation is deployed to detect complex functional bugs in addition to exposing electrical and manufacturing defects. This is due to the significantly higher execution performance offered by post-silicon methods, compared to pre-silicon approaches. Validation in the post-silicon domain is predominantly carried out by executing constrained-random test instruction sequences directly on a hardware prototype. However, to identify errors, the state obtained from executing tests directly in hardware must be compared to the one produced by an architectural simulation of the design's golden model. Therefore, the speed of validation is severely limited by the necessity of a costly simulation step. In this work we address this bottleneck in the traditional flow and present a novel solution for post-silicon validation that exposes its native high performance. Our framework, called Reversi, generates random programs in such a way that their correct final state is known at generation time, eliminating the need for architectural simulations. Our experiments show that Reversi generates tests exposing more bugs faster, and can speed up post-silicon validation by 20x compared to traditional flows.
Quick detection of difficult bugs for effective post-silicon validation We present a new technique for systematically creating post-silicon validation tests that quickly detect bugs in processor cores and uncore components (cache controllers, memory controllers, on-chip networks) of multi-core System on Chips (SoCs). Such quick detection is essential because long error detection latency, the time elapsed between the occurrence of an error due to a bug and its manifestation as an observable failure, severely limits the effectiveness of existing post-silicon validation approaches. In addition, we provide a list of realistic bug scenarios abstracted from “difficult” bugs that occurred in commercial multi-core SoCs. Our results for an OpenSPARC T2-like multi-core SoC demonstrate: 1. Error detection latencies of “typical” post-silicon validation tests can be very long, up to billions of clock cycles, especially for bugs in uncore components. 2. Our new technique shortens error detection latencies by several orders of magnitude to only a few hundred cycles for most bug scenarios. 3. Our new technique enables 2-fold increase in bug coverage. An important feature of our technique is its software-only implementation without any hardware modification. Hence, it is readily applicable to existing designs.
Threadmill: A post-silicon exerciser for multi-threaded processors Post-silicon validation poses unique challenges that bring-up tools must face, such as the lack of observability into the design, the typical instability of silicon bring-up platforms and the absence of supporting software (like an OS or debuggers). These challenges and the need to reach an optimal utilization of the expensive but very fast silicon platforms lead to unique design considerations - like the need to keep the tool simple and to perform most of its operation on platform without interaction with the environment. In this paper we describe a variety of novel techniques optimized for the unique characteristics of the silicon platform. These techniques are implemented in Threadmill - a bare-metal exerciser targeting multi-threaded processors. Threadmill was used in the verification of the POWER7 processor with encouraging results.
Accelerating microprocessor silicon validation by exposing ISA diversity Microprocessor design validation is a time consuming and costly task that tends to be a bottleneck in the release of new architectures. The validation step that detects the vast majority of design bugs is the one that stresses the silicon prototypes by applying huge numbers of random tests. Despite its bug detection capability, this step is constrained by extreme computing needs for random tests simulation to extract the bug-free memory image for comparison with the actual silicon image. We propose a self-checking method that accelerates silicon validation and significantly increases the number of applied random tests to improve bug detection efficiency and reduce time-to-market. Analysis of four major ISAs (ARM, MIPS, PowerPC, and x86) reveals their inherent diversity: more than three quarters of the instructions can be replaced with equivalent instructions. We exploit this property in post-silicon validation and propose a methodology for the generation of random tests that detect bugs by comparing results of equivalent instructions. We support our bug detection method in hardware with a light-weight mechanism which, in case of a mismatch, replays the random test replacing the offending instruction with its equivalent. Our bug detection method and corresponding hardware significantly accelerate the post-silicon validation process. Evaluation of the method on an x86 microprocessor model demonstrates its efficiency over simulation-based and self-checking alternatives, in terms of bug detection capabilities and validation time speedup.
Malicious Firmware Detection with Hardware Performance Counters. Critical infrastructure components nowadays use microprocessor-based embedded control systems. It is often infeasible, however, to employ the same level of security measures used in general purpose computing systems, due to the stringent performance and resource constraints of embedded control systems. Furthermore, as software sits atop and relies on the firmware for proper operation, software-lev...
Trends in functional verification: a 2014 industry study Technical publications often make either subjective or unsubstantiated claims about today's functional verification process---such as, 70 percent of a project's overall effort is spent in verification. Yet, there are very few credible industry studies that quantitatively provide insight into the functional verification process in terms of verification technology adoption, effort, and effectiveness. To address this dearth of knowledge, a recent world-wide, double-blind, functional verification study was conducted, covering all electronic industry market segments. To our knowledge, this is the largest independent functional verification study ever conducted. This paper presents the findings from our 2014 study and provides invaluable insight into the state of the electronic industry today in terms of both design and verification trends.
Cross-Tenant Side-Channel Attacks in PaaS Clouds We present a new attack framework for conducting cache-based side-channel attacks and demonstrate this framework in attacks between tenants on commercial Platform-as-a-Service (PaaS) clouds. Our framework uses the FLUSH-RELOAD attack of Gullasch et al. as a primitive, and extends this work by leveraging it within an automaton-driven strategy for tracing a victim's execution. We leverage our framework first to confirm co-location of tenants and then to extract secrets across tenant boundaries. We specifically demonstrate attacks to collect potentially sensitive application data (e.g., the number of items in a shopping cart), to hijack user accounts, and to break SAML single sign-on. To the best of our knowledge, our attacks are the first granular, cross-tenant, side-channel attacks successfully demonstrated on state-of-the-art commercial clouds, PaaS or otherwise.
Fully integrated wideband high-current rectifiers for inductively powered devices This paper describes the design and implementation of fully integrated rectifiers in BiCMOS and standard CMOS technologies for rectifying an externally generated RF carrier signal in inductively powered wireless devices, such as biomedical implants, radio-frequency identification (RFID) tags, and smartcards to generate an on-chip dc supply. Various full-wave rectifier topologies and low-power circuit design techniques are employed to decrease substrate leakage current and parasitic components, reduce the possibility of latch-up, and improve power transmission efficiency and high-frequency performance of the rectifier block. These circuits are used in wireless neural stimulating microsystems, fabricated in two processes: the University of Michigan's 3-μm 1M/2P N-epi BiCMOS, and the AMI 1.5-μm 2M/2P N-well standard CMOS. The rectifier areas are 0.12-0.48 mm² in the above processes and they are capable of delivering >25 mW from a receiver coil to the implant circuitry. The performance of these integrated rectifiers has been tested and compared, using carrier signals in 0.1-10-MHz range.
Compiler algorithms for synchronization Translating program loops into a parallel form is one of the most important transformations performed by concurrentizing compilers. This transformation often requires the insertion of synchronization instructions within the body of the concurrent loop. Several loop synchronization techniques are presented first. Compiler algorithms to generate synchronization instructions for singly-nested loops are then discussed. Finally, a technique for the elimination of redundant synchronization instructions is presented.
Nonlinear adaptive control of active suspensions In this paper, a previously developed nonlinear "sliding" control law is applied to an electro-hydraulic suspension system. The controller relies on an accurate model of the suspension system. To reduce the error in the model, a standard parameter adaptation scheme, based on Lyapunov analysis, is introduced. A modified adaptation scheme, which enables the identification of parameters whose values change with regions of the state space, is then presented. These parameters are not restricted to being slowly time-varying as in the standard adaptation scheme; however, they are restricted to being constant or slowly time varying within regions of the state space. The adaptation algorithms are coupled with the control algorithm and the resulting system performance is analyzed experimentally. The performance is determined by the ability of the actuator output to track a specified force. The performance of the active system, with and without the adaptation, is analyzed. Simulation and experimental results show that the active system is better than a passive system in terms of improving the ride quality of the vehicle. Furthermore, both of the adaptive schemes improve performance, with the modified scheme giving the greater improvement in performance.
Control of robotic mobility-on-demand systems: A queueing-theoretical perspective In this paper we present queueing-theoretical methods for the modeling, analysis, and control of autonomous mobility-on-demand (MOD) systems wherein robotic, self-driving vehicles transport customers within an urban environment and rebalance themselves to ensure acceptable quality of service throughout the network. We first cast an autonomous MOD system within a closed Jackson network model with passenger loss. It is shown that an optimal rebalancing algorithm minimizing the number of (autonomously) rebalancing vehicles while keeping vehicle availabilities balanced throughout the network can be found by solving a linear program. The theoretical insights are used to design a robust, real-time rebalancing algorithm, which is applied to a case study of New York City and implemented on an eight-vehicle mobile robot testbed. The case study of New York shows that the current taxi demand in Manhattan can be met with about 8,000 robotic vehicles (roughly 70% of the size of the current taxi fleet operating in Manhattan). Finally, we extend our queueing-theoretical setup to include congestion effects, and study the impact of autonomously rebalancing vehicles on overall congestion. Using a simple heuristic algorithm, we show that additional congestion due to autonomous rebalancing can be effectively avoided on a road network. Collectively, this paper provides a rigorous approach to the problem of system-wide coordination of autonomously driving vehicles, and provides one of the first characterizations of the sustainability benefits of robotic transportation networks.
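The rebalancing linear program can be sketched as a min-cost flow: choose nonnegative flows x[i][j] that restore each station's desired vehicle balance at minimum total travel time. The travel times and surpluses below are made-up toy numbers, and the formulation is a generic simplification of the paper's LP, not its exact statement.

```python
import numpy as np
from scipy.optimize import linprog

T = np.array([[0, 5, 9],
              [5, 0, 4],
              [9, 4, 0]], dtype=float)   # station-to-station travel times
d = np.array([-2, 1, 1], dtype=float)    # net vehicles needed (sums to 0)

n = len(d)
c = T.flatten()                          # cost of each flow variable x[i,j]
A_eq = np.zeros((n, n * n))
for i in range(n):
    for j in range(n):
        A_eq[i, i * n + j] -= 1.0        # flow leaving station i
        A_eq[j, i * n + j] += 1.0        # flow arriving at station j

# Balance constraint: inflow - outflow = d[i] at every station.
res = linprog(c, A_eq=A_eq, b_eq=d, bounds=(0, None))
print(res.x.reshape(n, n))               # optimal rebalancing flows
```

Here station 0 sheds two surplus vehicles to stations 1 and 2; the LP picks the cheapest routing, mirroring the paper's observation that optimal rebalancing reduces to a linear program.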
Exploration of Constantly Connected Dynamic Graphs Based on Cactuses. We study the problem of exploration by a mobile entity (agent) of a class of dynamic networks, namely constantly connected dynamic graphs. This problem has already been studied in the case where the agent knows the dynamics of the graph and the underlying graph is a ring of n vertices [5]. In this paper, we consider the same problem and we suppose that the underlying graph is a cactus graph (a connected graph in which any two simple cycles have at most one vertex in common). We propose an algorithm that allows the agent to explore these dynamic graphs in at most 2^{O(√log n)} n time units. We show that the lower bound of the algorithm is 2^{Ω(√log n)} n time units.
A 12.6 mW, 573-2901 kS/s Reconfigurable Processor for Reconstruction of Compressively Sensed Physiological Signals. This article presents a reconfigurable processor based on the alternating direction method of multipliers (ADMM) algorithm for reconstructing compressively sensed physiological signals. The architecture is flexible to support physiological ExG [electrocardiography (ECG), electromyography (EMG), and electroencephalography (EEG)] signal with various signal dimensions (128, 256, 384, and 512). Data c...
1.054569
0.052544
0.051333
0.041778
0.04
0.04
0.025667
0.000005
0
0
0
0
0
0
Interleaving Energy-Conservation Mode (IECM) Control in Single-Inductor Dual-Output (SIDO) Step-Down Converters With 91% Peak Efficiency The proposed single-inductor dual-output (SIDO) converter with interleaving energy-conservation mode (IECM) control is designed using 65 nm technology to power the ultra-wide band (UWB) system. The energy-conservation mode (ECM) control generates four different energy delivery paths for dual buck outputs with only one inductor. In addition, the superposition technique is used to achieve a minimized inductor current level. The average inductor current is equal to the summation of two output loads. Moreover, the IECM control activates the interleaving operation through the current interleaving mechanism to provide large driving capability as well as to reduce the output voltage ripple. As a result, 91% peak efficiency is derived and the output voltage ripple appears notably minimized by 50% using current interleaving at heavy load. The test chip occupies 1.44 mm² in 65 nm CMOS and integrates with a three-dimensional (3-D) architecture for inductor integration.
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use dominance frontiers, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
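For illustration, dominance frontiers can be computed compactly once immediate dominators are known. The sketch below uses the later Cooper-Harvey-Kennedy formulation rather than the paper's original algorithm; preds and idom are assumed inputs.

```python
def dominance_frontiers(preds, idom):
    """Compute dominance frontiers from immediate dominators.

    preds: dict node -> list of CFG predecessors
    idom : dict node -> immediate dominator (entry maps to itself)
    """
    df = {node: set() for node in preds}
    for node, ps in preds.items():
        if len(ps) < 2:
            continue                    # only join points contribute
        for p in ps:
            runner = p
            while runner != idom[node]:  # walk up the dominator tree
                df[runner].add(node)
                runner = idom[runner]
    return df

# Diamond CFG: entry -> a, b -> join; the frontier of a and b is {join},
# which is exactly where SSA construction would place a phi-function.
preds = {"entry": [], "a": ["entry"], "b": ["entry"], "join": ["a", "b"]}
idom = {"entry": "entry", "a": "entry", "b": "entry", "join": "entry"}
print(dominance_frontiers(preds, idom))
```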
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ⋄W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ⋄W. Thus, ⋄W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
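A minimal consistent-hashing sketch of Chord's single operation, mapping a key to its successor node on the identifier ring. Real Chord resolves lookups in O(log N) hops via finger tables; here the ring is just a sorted list, so the distributed routing aspect is not modeled and the node names are illustrative.

```python
import bisect
import hashlib

def ident(name: str) -> int:
    """Hash a node name or key onto the SHA-1 identifier circle."""
    return int.from_bytes(hashlib.sha1(name.encode()).digest(), "big")

class Ring:
    """Sorted-list stand-in for the Chord ring (no finger tables)."""
    def __init__(self, nodes):
        self.ids = sorted(ident(n) for n in nodes)
        self.name = {ident(n): n for n in nodes}

    def successor(self, key: str) -> str:
        """First node at or after the key's identifier, wrapping around."""
        i = bisect.bisect_left(self.ids, ident(key)) % len(self.ids)
        return self.name[self.ids[i]]

ring = Ring(["node-a", "node-b", "node-c"])
print(ring.successor("some-data-item"))   # node responsible for this key
```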
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area² product (EDA²P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA²P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
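As a concrete instance, a textbook ADMM sketch for the lasso, using the standard x/z splitting with a scaled dual variable; the parameter choices (rho, iteration count) are illustrative and untuned.

```python
import numpy as np

def admm_lasso(A, b, lam, rho=1.0, iters=200):
    """Textbook ADMM for min 0.5*||Ax - b||^2 + lam*||x||_1."""
    n = A.shape[1]
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))  # factor once, reuse
    Atb = A.T @ b
    soft = lambda v, k: np.sign(v) * np.maximum(np.abs(v) - k, 0.0)
    for _ in range(iters):
        # x-update: solve (A^T A + rho I) x = A^T b + rho (z - u)
        x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))
        z = soft(x + u, lam / rho)   # z-update: l1 proximal (soft threshold)
        u = u + x - z                # scaled dual ascent
    return z

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20)
x_true[:3] = [1.0, -2.0, 0.5]
b = A @ x_true + 0.01 * rng.standard_normal(50)
print(np.round(admm_lasso(A, b, lam=0.5), 2))  # recovers the sparse support
```

The same x-minimize / z-minimize / dual-update pattern is what makes the method attractive for the distributed settings the abstract surveys: the x-update splits across data partitions while z enforces consensus.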
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)-size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε²}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D+√n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(log N), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator; while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via error amplifier) with a moderate-gain high-frequency path (via bandpass filter) at the inputs of PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate two zeros required in a Type III compensator. Constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process with constant Gm/C biasing technique being applied to one of the designs. Measurement result shows that converter output is settled within 7 μs for a load current step of 500 mA. Peak efficiency of 97% is obtained at 360 mW output power, and high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of proposed compensator is reduced by > 75 % in both designs, compared to an equivalent conventional Type III compensator.
Conductance modulation techniques in switched-capacitor DC-DC converter for maximum-efficiency tracking and ripple mitigation in 22nm Tri-gate CMOS Active conduction modulation techniques are demonstrated in a fully integrated multi-ratio switched-capacitor voltage regulator with hysteretic control, implemented in 22nm tri-gate CMOS with high-density MIM capacitor. We present (i) an adaptive switching frequency and switch-size scaling scheme for maximum efficiency tracking across a wide range of voltages and currents, governed by a frequency-based control law that is experimentally validated across multiple dies and temperatures, and (ii) a simple active ripple mitigation technique to modulate gate drive of select MOSFET switches effectively in all conversion modes. Efficiency boosts of up to 15% are measured under light-load conditions. Load-independent output ripple of <50 mV is achieved, enabling fewer interleaving phases. Testchip implementations and measurements demonstrate ease of integration in SoC designs, power efficiency benefits and EMI/RFI improvements.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54 ×(CPU), 1.56 ×(GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02 ×(memristor-based PIM) and 10.48 ×(CMOS-based PIM), on average.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Challenges and Opportunities for Hardware-Assisted Security Improvements in the Field With our growing reliance on computer systems, designers increasingly look to hardware-based solutions for improving security in the face of many cybersecurity threats. Hardware-assisted security can take myriad forms, including integrating hardware components that monitor and respond to unexpected changes in system behaviour. However, systematically making decisions about what types of hardware for security improvement to incorporate, as well as designing and implementing the actual hardware, continues to be challenging. Moreover, the current attitude is that once the hardware has been committed to silicon, it is almost impossible to modify. In this paper, we provide an overview of some of the challenges that designers might face when incorporating hardware-based security approaches into system-on-chip designs and discuss some opportunities for research in this domain. In particular, we focus on design time considerations that can impact the ongoing security of systems in the field.
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use dominance frontiers, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ⋄W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ⋄W. Thus, ⋄W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area² product (EDA²P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA²P and EDAP.
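A quick worked example of the combined metrics named above: EDAP and EDA²P simply multiply energy, delay, and area (squared, in the latter case), so they reward designs that are frugal on all three axes at once. The numbers below are illustrative placeholders, not McPAT output.

```python
# Worked example of the energy-delay-area product (EDAP) and the
# energy-delay-area^2 product (EDA2P); all values are assumed.

energy_j = 2.0       # total energy for the workload (J)
delay_s  = 0.5       # execution time (s)
area_mm2 = 120.0     # die area (mm^2)

edap  = energy_j * delay_s * area_mm2
eda2p = energy_j * delay_s * area_mm2 ** 2
print(f"EDAP = {edap:.1f} J*s*mm^2, EDA2P = {eda2p:.1f} J*s*mm^4")
```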
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
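A minimal sketch of ADMM applied to one of the listed problems, the lasso; the x-, z-, and u-updates below follow the standard splitting (quadratic solve, soft thresholding, dual update). Problem sizes, λ, and ρ are illustrative choices, not prescriptions from the paper.

```python
# ADMM for the lasso: minimize (1/2)||Ax - b||^2 + lam*||x||_1.
import numpy as np

def admm_lasso(A, b, lam=0.1, rho=1.0, iters=200):
    n = A.shape[1]
    x = z = u = np.zeros(n)
    AtA_rhoI_inv = np.linalg.inv(A.T @ A + rho * np.eye(n))   # cache the factor
    Atb = A.T @ b
    soft = lambda v, k: np.sign(v) * np.maximum(np.abs(v) - k, 0.0)  # prox of l1
    for _ in range(iters):
        x = AtA_rhoI_inv @ (Atb + rho * (z - u))   # quadratic subproblem
        z = soft(x + u, lam / rho)                 # l1 proximal step
        u = u + x - z                              # dual (running residual) update
    return z

rng = np.random.default_rng(0)
A, x_true = rng.standard_normal((50, 20)), np.zeros(20)
x_true[:3] = [1.5, -2.0, 0.8]
print(admm_lasso(A, A @ x_true, lam=0.5)[:5])      # recovers the sparse support
```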
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)- size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε²}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D+√n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(log N), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator; while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via error amplifier) with a moderate-gain high-frequency path (via bandpass filter) at the inputs of PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate two zeros required in a Type III compensator. Constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process with constant Gm/C biasing technique being applied to one of the designs. Measurement result shows that converter output is settled within 7 μs for a load current step of 500 mA. Peak efficiency of 97% is obtained at 360 mW output power, and high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of proposed compensator is reduced by > 75 % in both designs, compared to an equivalent conventional Type III compensator.
Fundamental analysis of a car to car visible light communication system This paper presents a mathematical model for car-to-car (C2C) visible light communications (VLC) that aims to predict the system performance under different communication geometries. A market-weighted headlamp beam pattern model is employed. We consider both the line-of-sight (LOS) and non-line-of-sight (NLOS) links, and outline the relationship between the communication distance, the system bit error rate (BER) performance and the BER distribution on a vertical plane. Results show that by placing a photodetector (PD) at a height of 0.2-0.4 m above road surface in the car, the communications coverage range can be extended up to 20 m at a data rate of 2 Mbps.
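The paper's model uses a measured, market-weighted headlamp beam pattern; as a hedged stand-in, this sketch evaluates a generic Lambertian LOS channel gain and the OOK bit error rate BER = Q(√SNR), with all geometry and SNR values assumed for illustration.

```python
# Generic Lambertian LOS gain plus OOK BER; parameters are assumptions.
import math

def lambertian_gain(d, phi, psi, m=1, area=1e-4, fov=math.radians(60)):
    """LOS DC gain: (m+1)A / (2*pi*d^2) * cos^m(phi) * cos(psi); zero outside FOV."""
    if abs(psi) > fov:
        return 0.0
    return (m + 1) * area / (2 * math.pi * d ** 2) * math.cos(phi) ** m * math.cos(psi)

def ook_ber(snr_linear):
    # BER = Q(sqrt(SNR)), with Q(x) = 0.5 * erfc(x / sqrt(2))
    return 0.5 * math.erfc(math.sqrt(snr_linear) / math.sqrt(2))

h = lambertian_gain(d=20.0, phi=0.1, psi=0.1)
print(f"channel gain = {h:.3e}, BER @ SNR=15 dB: {ook_ber(10**1.5):.2e}")
```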
A 32-Channel Time-Multiplexed Artifact-Aware Neural Recording System This paper presents a low-power, low-noise microsystem for the recording of neural local field potentials or intracranial electroencephalographic signals. It features 32 time-multiplexed channels at the electrode interface and offers the possibility to spatially delta encode data to take advantage of the large correlation of signals captured from nearby channels. The circuit also implements a mixed-signal voltage-triggered auto-ranging algorithm which allows to attenuate large interferers in digital domain while preserving neural information. This effectively increases the system dynamic range and avoids the onset of saturation. A prototype, fabricated in a standard 180 nm CMOS process, has been experimentally verified in-vitro with cellular cultures of primary cortical neurons from mice. The system shows an integrated input-referred noise in the 0.5–200 Hz band of 1.4 μVrms for a spot noise of about 85 nV/√Hz. The system draws 1.5 μW per channel from a 1.2 V supply and obtains 71 dB + 26 dB dynamic range when the artifact-aware auto-ranging mechanism is enabled, without penalising other critical specifications such as crosstalk between channels or common-mode and power supply rejection ratios.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
On the power of waiting when exploring public transportation systems We study the problem of exploration by a mobile entity (agent) of a class of dynamic networks, namely the periodically-varying graphs (the PV-graphs, modeling public transportation systems, among others). These are defined by a set of carriers following infinitely their prescribed route along the stations of the network. Flocchini, Mans, and Santoro [FMS09] (ISAAC 2009) studied this problem in the case when the agent must always travel on the carriers and thus cannot wait on a station. They described the necessary and sufficient conditions for the problem to be solvable and proved that the optimal number of steps (and thus of moves) to explore a n-node PV-graph of k carriers and maximal period p is in Θ(k·p²) in the general case. In this paper, we study the impact of the ability to wait at the stations. We exhibit the necessary and sufficient conditions for the problem to be solvable in this context, and we prove that waiting at the stations allows the agent to reduce the worst-case optimal number of moves by a multiplicative factor of at least Θ(p), while the time complexity is reduced to Θ(n·p). (In any connected PV-graph, we have n ≤ k·p.) We also show some complementary optimal results in specific cases (same period for all carriers, highly connected PV-graphs). Finally this new ability allows the agent to completely map the PV-graph, in addition to just explore it.
On Temporal Graph Exploration. A temporal graph is a graph in which the edge set can change from step to step. The temporal graph exploration problem TEXP is the problem of computing a foremost exploration schedule for a temporal graph, i.e., a temporal walk that starts at a given start node, visits all nodes of the graph, and has the smallest arrival time. We consider only temporal graphs that are connected at each step. For such temporal graphs with n nodes, we show that it is NP-hard to approximate TEXP with ratio O(n^(1−ε)) for any ε > 0. We also provide an explicit construction of temporal graphs that require Θ(n²) steps to be explored. We then consider TEXP under the assumption that the underlying graph (i. e. the graph that contains all edges that are present in the temporal graph in at least one step) belongs to a specific class of graphs. Among other results, we show that temporal graphs can be explored in O(n^1.5 k² log n) steps if the underlying graph has treewidth k and in O(n log³ n) steps if the underlying graph is a 2 × n grid. We also show that sparse temporal graphs with regularly present edges can always be explored in O(n) steps.
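Computing a full foremost exploration schedule is NP-hard to approximate, as the abstract notes, but the underlying earliest-arrival relaxation is straightforward; this sketch propagates arrival times over an assumed per-step edge-set representation of a temporal graph.

```python
# Earliest-arrival times in a temporal graph, given one edge set per step.

def earliest_arrival(n, steps, start):
    """steps: list of edge sets, one per time step; edges are undirected pairs."""
    INF = float("inf")
    arrive = [INF] * n
    arrive[start] = 0
    for t, edges in enumerate(steps):
        for u, v in edges:
            # an agent already at a node by step t can cross an edge present at t
            if arrive[u] <= t and t + 1 < arrive[v]:
                arrive[v] = t + 1
            if arrive[v] <= t and t + 1 < arrive[u]:
                arrive[u] = t + 1
    return arrive

# 4-node temporal graph whose edge set changes every step
steps = [{(0, 1)}, {(1, 2)}, {(0, 3), (1, 2)}]
print(earliest_arrival(4, steps, start=0))   # [0, 1, 2, 3]
```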
On the exploration of time-varying networks We study the computability and complexity of the exploration problem in a class of highly dynamic networks: carrier graphs, where the edges between sites exist only at some (unknown) times defined by the periodic movements of mobile carriers among the sites. These graphs naturally model highly dynamic infrastructure-less networks such as public transports with fixed timetables, low earth orbiting (LEO) satellite systems, security guards' tours, etc. We focus on the opportunistic exploration of these graphs, that is by an agent that exploits the movements of the carriers to move in the network. We establish necessary conditions for the problem to be solved. We also derive lower bounds on the amount of time required in general, as well as for the carrier graphs defined by restricted classes of carrier movements. We then prove that the limitations on computability and complexity we have established are indeed tight. In fact we prove that all necessary conditions are also sufficient and all lower bounds on costs are tight. We do so constructively by presenting two optimal solution algorithms, one for anonymous systems, and one for those with distinct node IDs.
Exploration of the T-Interval-Connected Dynamic Graphs: The Case of the Ring In this paper, we study the T-interval-connected dynamic graphs from the point of view of the time necessary and sufficient for their exploration by a mobile entity (agent). A dynamic graph (more precisely, an evolving graph) is T-interval-connected (T ≥ 1) if, for every window of T consecutive time steps, there exists a connected spanning subgraph that is stable (always present) during this period. This property of connection stability over time was introduced by Kuhn, Lynch and Oshman [6] (STOC 2010). We focus on the case when the underlying graph is a ring of size n, and we show that the worst-case time complexity for the exploration problem is 2n − T − Θ(1) time units if the agent knows the dynamics of the graph, and n + (n/max{1, T−1})(δ − 1) ± Θ(δ) time units otherwise, where δ is the maximum time between two successive appearances of an edge.
Efficient routing in carrier-based mobile networks The past years have seen an intense research effort directed at study of delay/disruption tolerant networks and related concepts (intermittently connected networks, opportunistic mobility networks). As a fundamental primitive, routing in such networks has been one of the research foci. While multiple network models have been proposed and routing in them investigated, most of the published results are of heuristic nature with experimental validation; analytical results are scarce and apply mostly to networks whose structure follows deterministic schedule. In this paper, we propose a simple model of opportunistic mobility network based on oblivious carriers, and investigate the routing problem in such networks. We present an optimal online routing algorithm and compare it with a simple shortest-path inspired routing and optimal offline routing. In doing so, we identify the key parameters (the minimum non-zero probability of meeting among the carrier pairs, and the number of carriers a given carrier comes into contact) driving the separation among these algorithms.
Exploring an unknown dangerous graph using tokens Consider a team of (one or more) mobile agents operating in a graph G. Unaware of the graph topology and starting from the same node, the team must explore the graph. This problem, known as graph exploration, was initially formulated by Shannon in 1951, and has been extensively studied since under a variety of conditions. Most of the existing investigations have assumed that the network is safe for the agents, and the vast majority of the solutions presented in the literature succeed in their task only under this assumption. Recently, the exploration problem has been examined also when the network is unsafe. The danger examined is the presence in the network of a black hole, a node that disposes of any incoming agent without leaving any observable trace of this destruction. The goal is for at least one agent to survive and to have all the surviving agents to construct a map of the network, indicating the edges leading to the black hole. This variant of the problem is also known as a black hole search. This problem has been investigated for the most part assuming powerful inter-agent communication mechanisms: whiteboards at all nodes. Indeed, in this model, the black hole search problem can be solved with an optimal team size and performing a polynomial number of moves. In this paper, we consider the less powerful enhanced token model: each agent has available a token that can be carried, placed on a node or on a link, and can be removed from it. All tokens are identical and no other form of marking or communication is available. We constructively prove that the black hole search problem can be solved also in this model; furthermore, this can be done using a team of agents of optimal size and performing a polynomial number of moves. Our algorithm works even if the agents are asynchronous and if both the agents and the nodes are anonymous.
Exploring an unknown graph It is desired to explore all edges of an unknown directed, strongly connected graph. At each point one has a map of all nodes and edges visited, one can recognize these nodes and edges upon seeing them again, and it is known how many unexplored edges emanate from each node visited. The goal is to minimize the ratio of the total number of edges traversed to the optimum number of traversals had the graph been known. For Eulerian graphs this ratio cannot be better than 2, and 2 is achievable by a simple algorithm. In contrast, the ratio is unbounded when the deficiency of the graph (the number of edges that have to be added to make it Eulerian) is unbounded. The main result is an algorithm that achieves a bounded ratio when the deficiency is bounded; unfortunately the ratio is exponential in the deficiency. It is also shown that, when partial information about the graph is available, minimizing the worst-case ratio is PSPACE-complete.
Gossip-based aggregation in large dynamic networks As computer networks increase in size, become more heterogeneous and span greater geographic distances, applications must be designed to cope with the very large scale, poor reliability, and often, with the extreme dynamism of the underlying network. Aggregation is a key functional building block for such applications: it refers to a set of functions that provide components of a distributed system access to global information including network size, average load, average uptime, location and description of hotspots, and so on. Local access to global information is often very useful, if not indispensable for building applications that are robust and adaptive. For example, in an industrial control application, some aggregate value reaching a threshold may trigger the execution of certain actions; a distributed storage system will want to know the total available free space; load-balancing protocols may benefit from knowing the target average load so as to minimize the load they transfer. We propose a gossip-based protocol for computing aggregate values over network components in a fully decentralized fashion. The class of aggregate functions we can compute is very broad and includes many useful special cases such as counting, averages, sums, products, and extremal values. The protocol is suitable for extremely large and highly dynamic systems due to its proactive structure---all nodes receive the aggregate value continuously, thus being able to track any changes in the system. The protocol is also extremely lightweight, making it suitable for many distributed applications including peer-to-peer and grid computing systems. We demonstrate the efficiency and robustness of our gossip-based protocol both theoretically and experimentally under a variety of scenarios including node and communication failures.
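A minimal simulation of the push-pull averaging at the heart of such gossip protocols: every node repeatedly averages its value with a random peer, and the COUNT initialization (one node holds 1, the rest 0) turns the converged mean into a network-size estimate. The round count and seed below are arbitrary.

```python
# Push-pull averaging gossip; all values converge to the global mean.
import random

def gossip_average(values, rounds=30, seed=1):
    random.seed(seed)
    vals = list(values)
    n = len(vals)
    for _ in range(rounds):
        for i in range(n):
            j = random.randrange(n)          # pick a random peer
            avg = (vals[i] + vals[j]) / 2    # push-pull exchange
            vals[i] = vals[j] = avg
    return vals

# COUNT trick: one node starts at 1, the rest at 0; mean -> 1/n, so n ~ 1/mean
vals = gossip_average([1.0] + [0.0] * 99)
print(f"estimated network size ~ {1 / vals[0]:.1f}")
```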
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
The M-Machine multicomputer The M-Machine is an experimental multicomputer being developed to test architectural concepts motivated by the constraints of modern semiconductor technology and the demands of programming systems. The M-Machine computing nodes are connected with a 3-D mesh network; each node is a multithreaded processor incorporating 9 function units, on-chip cache, and local memory. The multiple function units are used to exploit both instruction-level and thread-level parallelism. A user accessible message passing system yields fast communication and synchronization between nodes. Rapid access to remote memory is provided transparently to the user with a combination of hardware and software mechanisms. This paper presents the architecture of the M-Machine and describes how its mechanisms attempt to maximize both single thread performance and overall system throughput. The architecture is complete and the MAP chip, which will serve as the M-Machine processing node, is currently being implemented.
Controlling the cost of reliability in peer-to-peer overlays Structured peer-to-peer overlay networks provide a useful substrate for building distributed applications but there are general concerns over the cost of maintaining these overlays. The current approach is to configure the overlays statically and conservatively to achieve the desired reliability even under uncommon adverse conditions. This results in high cost in the common case, or poor reliability in worse than expected conditions. We analyze the cost of overlay maintenance in realistic dynamic environments and design novel techniques to reduce this cost by adapting to the operating conditions. With our techniques, the concerns over the overlay maintenance cost are no longer warranted. Simulations using real traces show that they enable high reliability and performance even in very adverse conditions with low maintenance cost.
Chameleon: a dual-mode 802.11b/Bluetooth receiver system design In this paper, an approach to map the Bluetooth and 802.11b standards specifications into an architecture and specifications for the building blocks of a dual-mode direct conversion receiver is proposed. The design procedure focuses on optimizing the performance in each operating mode while attaining an efficient dual-standard solution. The impact of the expected receiver nonidealities and the characteristics of each building block are evaluated through bit-error-rate simulations. The proposed receiver design is verified through a fully integrated implementation from low-noise amplifier to analog-to-digital converter using IBM 0.25-μm BiCMOS technology. Experimental results from the integrated prototype meet the specifications from both standards and are in good agreement with the target sensitivity.
Optimum insertion/deletion point selection for fractional sample rate conversion In this paper, an optimum insertion/deletion point selection algorithm for fractional sample rate conversion (SRC) is proposed. The direct insertion/deletion technique achieves low complexity and low power consumption as compared to the other fractional SRC methods. Using a multiple set insertion/deletion technique is efficient for reduction of distortion caused by the insertion/deletion step. When the conversion factor is (N ± δ)/N, the number of possible patterns of insertion/deletion points and the number of combinations for multiple set inserters/deleters grow as δ increases. The proposed algorithm minimizes the distortion due to SRC by selecting the patterns and the combinations.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
1.036502
0.042048
0.037092
0.032974
0.022222
0.015583
0.007156
0.000313
0.000025
0
0
0
0
0
BulkVis: a graphical viewer for Oxford nanopore bulk FAST5 files. Motivation The Oxford Nanopore Technologies (ONT) MinION is used for sequencing a wide variety of sample types with diverse methods of sample extraction. Nanopore sequencers output FAST5 files containing signal data subsequently base-called to FASTQ format. Optionally, ONT devices can collect data from all sequencing channels simultaneously in a bulk FAST5 file enabling inspection of signal in any channel at any point. We sought to visualize this signal to inspect challenging or difficult-to-sequence samples. Results The BulkVis tool can load a bulk FAST5 file and overlays MinKNOW (the software that controls ONT sequencers) classifications on the signal trace and can show mappings to a reference. Users can navigate to a channel and time or, given a FASTQ header from a read, jump to its specific position. BulkVis can export regions as Nanopore base caller compatible reads. Using BulkVis, we find long reads can be incorrectly divided by MinKNOW resulting in single DNA molecules being split into two or more reads. The longest seen to date is 2,272,580 bases in length and reported in eleven consecutive reads. We provide helper scripts that identify and reconstruct split reads given a sequencing summary file and alignment to a reference. We note that incorrect read splitting appears to vary according to input sample type and is more common in 'ultra-long' read preparations. Availability and implementation The software is available freely under an MIT license at https://github.com/LooseLab/bulkvis.
SWIFOLD: Smith-Waterman implementation on FPGA with OpenCL for long DNA sequences. The results suggest that SWIFOLD can be a serious contender for accelerating the SW alignment of DNA sequences of unrestricted size in an affordable way reaching on average 125 GCUPS and almost a peak of 270 GCUPS.
GSWABE: faster GPU-accelerated sequence alignment with optimal alignment retrieval for short DNA sequences In this paper, we present GSWABE, a graphics processing unit (GPU)-accelerated pairwise sequence alignment algorithm for a collection of short DNA sequences. This algorithm supports all-to-all pairwise global, semi-global and local alignment, and retrieves optimal alignments on Compute Unified Device Architecture (CUDA)-enabled GPUs. All of the three alignment types are based on dynamic programming and share almost the same computational pattern. Thus, we have investigated a general tile-based approach to facilitating fast alignment by deeply exploring the powerful compute capability of CUDA-enabled GPUs. The performance of GSWABE has been evaluated on a Kepler-based Tesla K40 GPU using a variety of short DNA sequence datasets. The results show that our algorithm can yield a performance of up to 59.1 billion cell updates per second (GCUPS), 58.5 GCUPS and 50.3 GCUPS for global, semi-global and local alignment, respectively. Furthermore, on the same system GSWABE runs up to 156.0 times faster than the Streaming SIMD Extensions (SSE)-based SSW library and up to 102.4 times faster than the CUDA-based MSA-CUDA (the first stage) in terms of local alignment. Compared with the CUDA-based gpu-pairAlign, GSWABE demonstrates stable and consistent speedups with a maximum speedup of 11.2, 10.7, and 10.6 for global, semi-global, and local alignment, respectively.
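For reference, the recurrence that GSWABE tiles and parallelizes on the GPU is the classic Smith-Waterman local-alignment dynamic program; a plain-Python version (scoring constants assumed) is sketched below.

```python
# Smith-Waterman local alignment score; match/mismatch/gap values are assumed.

def smith_waterman(a, b, match=2, mismatch=-3, gap=-5):
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)
            # local alignment clamps at zero so bad prefixes are discarded
            H[i][j] = max(0, diag, H[i-1][j] + gap, H[i][j-1] + gap)
            best = max(best, H[i][j])
    return best

print(smith_waterman("ACACACTA", "AGCACACA"))
```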
Emerging Trends in Design and Applications of Memory-Based Computing and Content-Addressable Memories Content-addressable memory (CAM) and associative memory (AM) are types of storage structures that allow searching by content as opposed to searching by address. Such memory structures are used in diverse applications ranging from branch prediction in a processor to complex pattern recognition. In this paper, we review the emerging challenges and opportunities in implementing different varieties of...
FPGA Accelerated INDEL Realignment in the Cloud The amount of data being generated in genomics is predicted to be between 2 and 40 exabytes per year for the next decade, making genomic analysis the new frontier and the new challenge for precision medicine. This paper explores targeted deployment of hardware accelerators in the cloud to improve the runtime and throughput of immense-scale genomic data analyses. In particular, INDEL (INsertion/DELetion) realignment is a critical operation that enables diagnostic testing of cancer through error correction prior to variant calling. It is the slowest part of the somatic (cancer) genomic analysis pipeline, the alignment refinement pipeline, and represents roughly one-third of the execution time of time-sensitive diagnostics for acute cancer patients.To accelerate genomic analysis, this paper describes a hardware accelerator for INDEL realignment (IR), and a hardware-software framework leveraging FPGAs-as-a-service in the cloud. We chose to implement genomics analytics on FPGAs because genomic algorithms are still rapidly evolving (e.g. the de facto standard "GATK Best Practices" has had five releases since January of this year). We chose to deploy genomics accelerators in the cloud to reduce capital expenditure and to provide a more quantitative performance and cost analysis. We built and deployed a sea of IR accelerators using our hardware-software accelerator development framework on AWS EC2 F1 instances. We show that our IR accelerator system performed 81x better than multi-threaded genomic analysis software while being 32x more cost efficient.
SeGraM: a universal hardware accelerator for genomic sequence-to-graph and sequence-to-sequence mapping A critical step of genome sequence analysis is the mapping of sequenced DNA fragments (i.e., reads) collected from an individual to a known linear reference genome sequence (i.e., sequence-to-sequence mapping). Recent works replace the linear reference sequence with a graph-based representation of the reference genome, which captures the genetic variations and diversity across many individuals in a population. Mapping reads to the graph-based reference genome (i.e., sequence-to-graph mapping) results in notable quality improvements in genome analysis. Unfortunately, while sequence-to-sequence mapping is well studied with many available tools and accelerators, sequence-to-graph mapping is a more difficult computational problem, with a much smaller number of practical software tools currently available. We analyze two state-of-the-art sequence-to-graph mapping tools and reveal four key issues. We find that there is a pressing need to have a specialized, high-performance, scalable, and low-cost algorithm/hardware co-design that alleviates bottlenecks in both the seeding and alignment steps of sequence-to-graph mapping. Since sequence-to-sequence mapping can be treated as a special case of sequence-to-graph mapping, we aim to design an accelerator that is efficient for both linear and graph-based read mapping. To this end, we propose SeGraM, a universal algorithm/hardware co-designed genomic mapping accelerator that can effectively and efficiently support both sequence-to-graph mapping and sequence-to-sequence mapping, for both short and long reads. To our knowledge, SeGraM is the first algorithm/hardware co-design for accelerating sequence-to-graph mapping. SeGraM consists of two main components: (1) MinSeed, the first minimizer-based seeding accelerator, which finds the candidate locations in a given genome graph; and (2) BitAlign, the first bitvector-based sequence-to-graph alignment accelerator, which performs alignment between a given read and the subgraph identified by MinSeed. We couple SeGraM with high-bandwidth memory to exploit low latency and highly-parallel memory access, which alleviates the memory bottleneck. We demonstrate that SeGraM provides significant improvements for multiple steps of the sequence-to-graph (i.e., S2G) and sequence-to-sequence (i.e., S2S) mapping pipelines. First, SeGraM outperforms state-of-the-art S2G mapping tools by 5.9×/3.9× and 106×/742× for long and short reads, respectively, while reducing power consumption by 4.1×/4.4× and 3.0×/3.2×. Second, BitAlign outperforms a state-of-the-art S2G alignment tool by 41×-539× and three S2S alignment accelerators by 1.2×-4.8×. We conclude that SeGraM is a high-performance and low-cost universal genomics mapping accelerator that efficiently supports both sequence-to-graph and sequence-to-sequence mapping pipelines.
An FPGA Implementation of A Portable DNA Sequencing Device Based on RISC-V Miniature and mobile DNA sequencers are steadily growing in popularity as effective tools for genetics research. As basecalling algorithms continue to evolve, basecalling poses a serious challenge for small computing devices despite its increasing accuracy. Although general-purpose computing chips such as CPUs and GPUs can achieve fast results, they are not energy efficient enough for mobile applications. This paper presents an innovative solution, a basecalling hardware architecture based on RISC-V ISA, and after validation with our custom FPGA verification platform, it demonstrates a 1.95x energy efficiency ratio compared to x86. There is also a 38% improvement in energy efficiency ratio compared to ARM. In addition, this study also completes the verification work for subsequent ASIC designs.
Accelerating read mapping with FastHASH. With the introduction of next-generation sequencing (NGS) technologies, we are facing an exponential increase in the amount of genomic sequence data. The success of all medical and genetic applications of next-generation sequencing critically depends on the existence of computational techniques that can process and analyze the enormous amount of sequence data quickly and accurately. Unfortunately, the current read mapping algorithms have difficulties in coping with the massive amounts of data generated by NGS.We propose a new algorithm, FastHASH, which drastically improves the performance of the seed-and-extend type hash table based read mapping algorithms, while maintaining the high sensitivity and comprehensiveness of such methods. FastHASH is a generic algorithm compatible with all seed-and-extend class read mapping algorithms. It introduces two main techniques, namely Adjacency Filtering, and Cheap K-mer Selection.We implemented FastHASH and merged it into the codebase of the popular read mapping program, mrFAST. Depending on the edit distance cutoffs, we observed up to 19-fold speedup while still maintaining 100% sensitivity and high comprehensiveness.
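A hedged sketch of the Adjacency Filtering idea as described in the abstract: candidate locations from a k-mer hash index survive only if every subsequent k-mer of the read also hits at the expected adjacent offset. The index layout and k = 4 are illustrative choices, not mrFAST's actual data structures.

```python
# Hash-based seeding with adjacency-style filtering (toy version).
from collections import defaultdict

K = 4

def index_reference(ref):
    idx = defaultdict(list)
    for i in range(len(ref) - K + 1):
        idx[ref[i:i + K]].append(i)
    return idx

def candidate_locations(read, idx):
    """Keep only locations where every k-mer of the read lands adjacently."""
    survivors = []
    for loc in idx.get(read[:K], []):
        if all(loc + o in idx.get(read[o:o + K], [])
               for o in range(K, len(read) - K + 1, K)):
            survivors.append(loc)
    return survivors

ref = "ACGTACGTTTACGTACGA"
print(candidate_locations("ACGTACGT", index_reference(ref)))  # [0]
```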
A Linear Representation of Dynamics of Boolean Networks A new matrix product, called semi-tensor product of matrices, is reviewed. Using it, a matrix expression of logic is proposed, where a logical variable is expressed as a vector, a logical function is expressed as a multiple linear mapping. Under this framework, a Boolean network equation is converted into an equivalent algebraic form as a conventional discrete-time linear system. Analyzing the transition matrix of the linear system, formulas are obtained to show a) the number of fixed points; b) the numbers of cycles of different lengths; c) transient period, for all points to enter the set of attractors; and d) basin of each attractor. The corresponding algorithms are developed and used to some examples.
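In the semi-tensor-product framework the network dynamics become a linear map on state indicator vectors; equivalently, one can tabulate the 2^n-state transition map directly, as this toy sketch does to read off fixed points and attractor cycle lengths. The two-node update rules below are an assumed example, not one from the paper.

```python
# Enumerate a Boolean network's state space; count fixed points and cycles.
from itertools import product

# Toy 2-node network: x1' = x1 AND x2, x2' = x1 OR x2 (assumed example)
step = lambda x1, x2: (x1 & x2, x1 | x2)

states = list(product([0, 1], repeat=2))
nxt = {s: step(*s) for s in states}

fixed = [s for s in states if nxt[s] == s]
print("fixed points:", fixed)

# Follow each state until it repeats to find its attractor's cycle length
for s in states:
    seen, cur = [], s
    while cur not in seen:
        seen.append(cur)
        cur = nxt[cur]
    print(s, "-> cycle length", len(seen) - seen.index(cur))
```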
The Transitive Reduction of a Directed Graph
A new concept for wireless reconfigurable receivers In this article we present the Self-Adaptive Universal Receiver (SAUR), a novel wireless reconfigurable receiver architecture. This scheme is based on blind recognition of the system in use, operating on a new radio interface comprising two functional phases. The first phase performs a wideband analysis (WBA) on the received signal to determine its standard. The second phase corresponds to demodulation. Here we only focus on the WBA phase, which consists of an iterative process to find the bandwidth compatible with the associated signal processing techniques. The blind standard recognition performed in the last iteration step of this process uses radial basis function neural networks. This allows a strong analogy between our approach and conventional pattern recognition problems. The efficiency of this type of blind recognition is illustrated with the results of extensive simulations performed in our laboratory using true data of received signals.
FPGA Implementation of High-Frequency Software Radio Receiver State-of-the-art analog-to-digital converters allow the design of high-frequency software radio receivers that use baseband signal processing. However, such receivers are rarely considered in literature. In this paper, we describe the design of a high-performance receiver operating at high frequencies, whose digital part is entirely implemented in an FPGA device. The design of digital subsystem is given, together with the design of a low-cost analog front end.
A Hybrid Dynamic Load Balancing Algorithm For Distributed Systems Using Genetic Algorithms Dynamic Load Balancing (DLB) is sine qua non in modern distributed systems to ensure the efficient utilization of computing resources therein. This paper proposes a novel framework for hybrid dynamic load balancing. Its framework uses a Genetic Algorithms (GA) based supernode selection approach within. The GA-based approach is useful in choosing optimally loaded nodes as the supernodes directly from data set, thereby essentially improving the speed of load balancing process. Applying the proposed GA-based approach, this work analyzes the performance of hybrid DLB algorithm under different system states such as lightly loaded, moderately loaded, and highly loaded. The performance is measured with respect to three parameters: average response time, average round trip time, and average completion time of the users. Further, it also evaluates the performance of hybrid algorithm utilizing OnLine Transaction Processing (OLTP) benchmark and Sparse Matrix Vector Multiplication (SPMV) benchmark applications to analyze its adaptability to I/O-intensive, memory-intensive, or/and CPU-intensive applications. The experimental results show that the hybrid algorithm significantly improves the performance under different system states and under a wide range of workloads compared to traditional decentralized algorithm.
OMNI: A Framework for Integrating Hardware and Software Optimizations for Sparse CNNs Convolution neural networks (CNNs) as one of today’s main flavor of deep learning techniques dominate in various image recognition tasks. As the model size of modern CNNs continues to grow, neural network compression techniques have been proposed to prune the redundant neurons and synapses. However, prior techniques disconnect the software neural networks compression and hardware acceleration, whi...
1.2
0.2
0.2
0.2
0.2
0.2
0.2
0.05
0
0
0
0
0
0
A 5-Gb/s ADC-Based Feed-Forward CDR in 65 nm CMOS This paper presents an ADC-based CDR that blindly samples the received signal at twice the data rate and uses these samples to directly estimate the locations of zero crossings for the purpose of clock and data recovery. We successfully confirmed the operation of the proposed CDR architecture at 5 Gb/s. The receiver is implemented in 65 nm CMOS, occupies 0.51 mm² and consumes 178.4 mW at 5 Gb/s.
A 5.75 to 44 Gb/s Quarter Rate CDR With Data Rate Selection in 90 nm Bulk CMOS This paper presents a quarter-rate clock and data recovery (CDR) circuit for plesiochronous serial I/O-links. The 2×-oversampling phase-tracking CDR, implemented in 90 nm bulk CMOS technology, covers the whole range of data rates from 5.75 to 44 Gb/s realized in a single IC by the novel feature of a data rate selection logic. Input data are sampled with eight parallel differential master-slave...
A 3x9 Gb/s Shared, All-Digital CDR for High-Speed, High-Density I/O. This paper presents a novel all-digital CDR scheme in 90 nm CMOS. Two independently adjustable clock phases are generated from a delay line calibrated to 2 UI. One clock phase is placed in the middle of the eye to recover the data (“data clock”) and the other is swept across the delay line (“search clock”). As the search clock is swept, its samples are compared against the data samples to generate...
A 26–28-Gb/s Full-Rate Clock and Data Recovery Circuit With Embedded Equalizer in 65-nm CMOS This paper presents a power and area efficient approach to embed a continuous-time linear equalizer (CTLE) within a clock and data recovery (CDR) circuit implemented in 65-nm CMOS. The merged equalizer/CDR circuit achieves full-rate operation up to 28 Gb/s while drawing 104 mA from a 1-V supply and occupying 0.33 mm2. Current-mode-logic (CML) circuits with shunt peaking loads using customized differential pair layout are used to maximize circuit bandwidth. To minimize the area penalty, differential stacked spiral inductors (DSSIs) are employed extensively. A novel and practical methodology is introduced for designing DSSIs based on single-layer inductors provided in foundry process design kits (PDK). The DSSI design increases the inductance density by over 3 times and the self-resonance frequency by 20% compared to standard single-layer inductors in the PDK. The measured BER of the recovered data by the CDR is less than 10^−12 at 27 Gb/s for a 2^11−1, 400 mVpp pseudo-random binary sequence (PRBS) as input data. The measured rms jitter of the recovered clock and data are 1.0 and 2.6 ps, respectively. The CDR is able to lock to inputs ranging from 26 to 28 Gb/s with a 2^9−1 PRBS pattern. Measurement results show that with the equalizer enabled, the CDR can recover a 26-Gb/s 2^7−1 PRBS data with BER ≤ 10^−12 after a channel with 9-dB loss at 13 GHz.
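The 2^7−1 through 2^11−1 test patterns in this abstract are LFSR-generated pseudo-random bit sequences; a minimal maximal-length 7-bit LFSR (PRBS-7 family, seed assumed) is sketched below for illustration.

```python
# PRBS-7 generator: a 7-bit Fibonacci LFSR with feedback from the two top
# bits, which yields a maximal-length sequence of period 2^7 - 1 = 127.

def prbs7(nbits, state=0x7F):
    out = []
    for _ in range(nbits):
        new = ((state >> 6) ^ (state >> 5)) & 1   # XOR of the two highest bits
        out.append(state & 1)                     # emit the low bit
        state = ((state << 1) | new) & 0x7F       # shift left, insert feedback
    return out

seq = prbs7(254)
print(seq[:127] == seq[127:254])   # True: the sequence repeats every 127 bits
```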
A 0.0285mm² 0.68pJ/bit Single-Loop Full-Rate Bang-Bang CDR without Reference and Separate Frequency Detector Achieving an 8.2(Gb/s)/µs Acquisition Speed of PAM-4 data in 28nm CMOS A single-loop full-rate bang-bang CDR without the reference and separate frequency detector (FD) is reported. Its phase detector innovates a strobe-point selection scheme and a hybrid control circuit to automate and accelerate the frequency acquisition over a wide frequency range. Prototyped in 28nm CMOS, our CDR achieves a 23-to-29Gb/s capture range of four-level pulse amplitude modulation (PAM-4) data. The acquisition speed [8.2(Gb/s)/μs], die area (0.0285mm²) and energy efficiency (0.68pJ/bit) compare favorably with the prior art.
Bird's-Eye View of Analog and Mixed-Signal Chips for the 21st Century The Internet of Everything (IoE), clearly a 21st century's technology, brilliantly plays with digital data obtained from analog sources, bringing together two different realities, the analog (physical/real), and the digital (cyber/virtual) worlds. Then, with the boundaries of IoE still analog in nature, the required functions at the interface involve sensing, measuring, filtering, converting, processing, and connecting, which imply that the analog layer governs the entire system in terms of accuracy and precision. Furthermore, such interface integrates several analog and mixed-signal subsystems that comprise mainly signal transmission and reception, frequency generation, energy harvesting, data, and power conversion. This paper sets forth a state-of-the-art design perspective of some of the most critical building blocks used in the analog/digital interface, covering wireless cellular transceivers, millimeter-wave frequency generators, energy harvesting interfaces, plus, data and power converters, that exhibit high quality performance achieved through low-power consumption, high energy-efficiency, and high speed.
A 5.4-Gbit/s Adaptive Continuous-Time Linear Equalizer Using Asynchronous Undersampling Histograms We demonstrate a new type of adaptive continuous-time linear equalizer (CTLE) based on asynchronous undersampling histograms. Our CTLE automatically selects the optimal equalizing filter coefficient among several predetermined values by searching for the coefficient that produces the largest peak value in histograms obtained with asynchronous undersampling. This scheme is simple and robust and does not require clock synchronization for its operation. A prototype chip realized in 0.13-μm CMOS technology successfully achieves equalization for 5.4-Gbit/s 2^31 − 1 pseudorandom bit sequence data through 40-, 80-, and 120-cm PCB traces and 3-m DisplayPort cable. In addition, we present the results of statistical analysis with which we verify the reliability of our scheme for various sample sizes. The results of this analysis are confirmed with experimental data.
An all-digital clock generator using a fractionally injection-locked oscillator in 65nm CMOS Injection locking is an effective method to reduce the jitter of clock generators especially for a ring oscillator-based PLL that has poor phase noise. While the use of injection locking reduces the output jitter, one disadvantage is that the output frequency can be changed only by integer multiples of the reference frequency, if it can be changed at all. In this work, an ADPLL-based clock generator is presented that employs a fractional-injection-locking method that exploits the multiphase output of a ring oscillator. The clock generator achieves an average of 4.23 ps rms jitter and a frequency resolution of 1 MHz while using a reference clock of 32 MHz.
A Linear Representation of Dynamics of Boolean Networks A new matrix product, called semi-tensor product of matrices, is reviewed. Using it, a matrix expression of logic is proposed, where a logical variable is expressed as a vector, a logical function is expressed as a multiple linear mapping. Under this framework, a Boolean network equation is converted into an equivalent algebraic form as a conventional discrete-time linear system. Analyzing the transition matrix of the linear system, formulas are obtained to show a) the number of fixed points; b) the numbers of cycles of different lengths; c) transient period, for all points to enter the set of attractors; and d) basin of each attractor. The corresponding algorithms are developed and used to some examples.
Dynamic spectrum access in open spectrum wireless networks One of the reasons for the limitation of bandwidth in current generation wireless networks is the spectrum policy of the Federal Communications Commission (FCC). But, with the spectrum policy reform, open spectrum wireless networks, and spectrum agile radios are set to drive next general wireless networks. In this paper, we investigate continuous-time Markov models for dynamic spectrum access in open spectrum wireless networks. Both queueing and no queueing cases are considered. Analytical results are derived based on the Markov models. A random access protocol is proposed that is shown to achieve airtime fairness. A distributed version of this protocol that uses only local information is also proposed based on homo egualis anthropological model. Inequality aversion by the radio systems to achieve fairness is captured by this model. These protocols are then extended to spectrum agile radios. Extensive simulation results are presented to compare the performances of fixed versus agile radios.
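Models of this kind reduce to solving for a continuous-time Markov chain's stationary distribution; the generic solver below handles an assumed 3-state rate matrix (say, idle / one system active / two systems sharing), not the paper's actual chains.

```python
# Stationary distribution of a CTMC: solve pi Q = 0 with sum(pi) = 1.
import numpy as np

Q = np.array([[-1.0,  1.0,  0.0],    # arrivals at rate 1 (illustrative rates)
              [ 2.0, -3.0,  1.0],    # departures at rate 2
              [ 0.0,  4.0, -4.0]])

# Replace one (redundant) balance equation with the normalization constraint
A = np.vstack([Q.T[:-1], np.ones(3)])
b = np.array([0.0, 0.0, 1.0])
pi = np.linalg.solve(A, b)
print("stationary distribution:", pi.round(4))   # [8/13, 4/13, 1/13]
```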
The evolution of hardware platforms for mobile 'software defined radio' terminals. The deployment of communication systems mainly depends on the availability of appropriate microelectronics. Therefore, the Fraunhofer-Institut für Mikroelektronische Schaltungen und Systeme (IMS) considers the combined approach to communication and microelectronic system design as crucial. This paper explores the impact of anticipated communication services for future wireless communication systems on the evolution of microelectronics for wireless terminals. A roadmap is presented which predicts the hardware/software split of future software defined radio terminals (SDR terminals). Additionally, a new philosophy for analog and digital codesign is introduced, which may help to accelerate the appearance of mobile software defined radio terminals.
Design Aspects of an Active Electromagnetic Suspension System for Automotive Applications. This paper is concerned with the design aspects of an active electromagnetic suspension system for automotive applications which combines a brushless tubular permanent magnet actuator (TPMA) with a passive spring. This system provides for additional stability and safety by performing active roll and pitch control during cornering and braking. Furthermore, elimination of the road irregularities is possible, hence passenger drive comfort is increased. Based upon measurements, static and dynamic specifications of the actuator are derived. The electromagnetic suspension is installed on a quarter car test setup, and the improved performance using roll control is measured and compared to a commercial passive system. An alternative design using a slotless external magnet tubular actuator is proposed which fulfills the derived performance, thermal and volume specifications.
A 10-Bit 800-MHz 19-mW CMOS ADC A pipelined ADC employs charge-steering op amps to relax the trade-offs among speed, noise, and power consumption. Applying full-rate nonlinearity and gain error calibration, a prototype realized in 65-nm CMOS technology achieves an SNDR of 52.2 dB at an input frequency of 399.2 MHz and an FoM of 53 fJ/conversion-step.
A 1V 3.5 μW Bio-AFE With Chopper-Capacitor-Chopper Integrator-Based DSL and Low Power GM-C Filter This brief presents a low-noise, low-power bio-signal acquisition analog front-end (Bio-AFE). It mainly includes a capacitively coupled chopper-stabilized instrumentation amplifier (CCIA), a programmable gain amplifier (PGA), a low-pass filter (LPF), and a successive approximation analog to digital converter (SAR ADC). A chopper-capacitor-chopper integrator based DC servo loop (C3IB-DSL...
1.083312
0.06665
0.05
0.05
0.05
0.025
0.01125
0.000208
0
0
0
0
0
0
A wide common-mode fully-adaptive multi-standard 12.5Gb/s backplane transceiver in 28nm CMOS
A 19-Gb/s Serial Link Receiver With Both 4-Tap FFE and 5-Tap DFE Functions in 45-nm SOI CMOS This paper presents the design of a 19-Gb/s serial link receiver with both 4-tap feed-forward equalizer (FFE) and 5-tap decision-feedback equalizer (DFE), thereby making the equalization system self-contained in the receiver. This design extends existing power-efficient DFEs based on current-integrating summers and adds FFE functionality to the DFE circuit infrastructure for an efficient implementation. Key techniques for implementing receive-side FFE are: the use of multiphase quarter-rate sample-and-hold circuits for generating multiple time-shifted input data signals, time-based analog multiplication for FFE coefficient weighting, and a merged FFE/DFE summer. The receiver test chip, implemented in a 45-nm silicon-on-insulator (SOI) CMOS technology, occupies 0.07 mm² and has a power efficiency of 6.2 mW/Gb/s at 19 Gb/s. Step-response characterization of the receiver demonstrates accurate FFE computation. The receiver equalizes a 35-in PCB trace at 17 Gb/s with a channel loss of 30 dB at 8.5 GHz and a 20-in PCB trace at 19 Gb/s with a channel loss of 25 dB at 9.5 GHz.
Verifying global start-up for a Möbius ring-oscillator This paper presents the formal verification of start-up for a differential ring-oscillator circuit used in industrial designs. We present an efficient algorithm for finding DC equilibria to establish a condition that ensure the oscillator is free from lock-up. Further, we present a formal verification solution for the problem. Using dynamical systems theory, we show that any oscillator must have a non-empty set of states from which it fails to start properly. However, it is possible to show that these failures only occur with zero probability. To do so, this paper generalizes the "cone argument" initially presented in (Mitchell and Greenstreet, in Proceedings of the third workshop on designing correct circuits, 1996) and proves the soundness of this generalization. This paper also shows how concepts from analog design such as differential operation can be soundly incorporated into the verification to produce simpler models and reduce the complexity of the verification task.
A 28-Gb/s 4-Tap FFE/15-Tap DFE Serial Link Transceiver in 32-nm SOI CMOS Technology. This paper presents a 28-Gb/s transceiver in 32-nm SOI CMOS technology for chip-to-chip communications over high-loss electrical channels such as backplanes. The equalization needed for such applications is provided by a 4-tap baud-spaced feed-forward equalizer (FFE) in the transmitter and a two-stage peaking amplifier and 15-tap decision-feedback equalizer (DFE) in the receiver. The transmitter e...
Fully Digital Transmit Equalizer With Dynamic Impedance Modulation. This paper analyzes the energy efficiency of different transmit equalizer driver topologies. Dynamic impedance modulation is found to be the most energy-efficient mechanism for transmit pre-emphasis, when compared with impedance-maintaining current and voltage-mode drivers. The equalizing transmitter is implemented as a digital push-pull impedance-modulating (RM) driver with fully digital RAM-DAC ...
A 14-mW 6.25-Gb/s Transceiver in 90-nm CMOS This paper describes a 6.25-Gb/s 14-mW transceiver in 90-nm CMOS for chip-to-chip applications. The transceiver employs a number of features for reducing power consumption, including a shared LC-PLL clock multiplier, an inductor-loaded resonant clock distribution network, a low- and programmable-swing voltage-mode transmitter, software-controlled clock and data recovery (CDR) and adaptive equaliza...
21.1 A 1.7GHz MDLL-based fractional-N frequency synthesizer with 1.4ps RMS integrated jitter and 3mW power using a 1b TDC The introduction of inductorless frequency synthesizers into standardized wireless systems still requires a high level of innovation in order to achieve the stringent requirements of low noise and low power consumption. Synthesizers based on the so-called multiplying delay-locked loop (MDLL) represent one of the most promising architectures in this direction [1-3]. An MDLL resembles a ring oscillator, in which the signal edge traveling along the delay line is periodically refreshed by a clean edge of the reference clock. In this manner, the phase noise of the ring oscillator is filtered up to half the reference frequency and the total output jitter is reduced significantly. Unfortunately, the concept of MDLL, and in general of injection locking (IL), is inherently limited to integer-N synthesis, which makes it unacceptable in practical RF systems. A first extension of injection locking to coarse fractional-N resolution has been shown in [4], in which, however, the fractional resolution is bounded by the inverse of the number of ring-oscillator delay stages. This paper introduces a fractional-N MDLL-based frequency synthesizer with a 1b time-to-digital converter (TDC), which is able to exceed the performance of inductorless fractional-N synthesizers. The prototype synthesizes frequencies between 1.6 and 1.9GHz with 190Hz resolution and achieves RMS integrated jitter of 1.4ps at 3mW power consumption, even in the worst case of a near-integer channel.
A 2.4-GHz 6.4-mW fractional-N inductorless RF synthesizer. A cascaded synthesizer architecture incorporates a digital delay-line-based filter and an analog noise trap to suppress the quantization noise of the ΣΔ modulator. Operating with a reference frequency of 22.6 MHz, the synthesizer achieves a bandwidth of 10 MHz in the first loop and 12 MHz in the second, heavily suppressing the phase noise of its constituent ring oscillators. Realized in 45-nm digi...
GPUWattch: enabling energy optimizations in GPGPUs General-purpose GPUs (GPGPUs) are becoming prevalent in mainstream computing, and performance per watt has emerged as a more crucial evaluation metric than peak performance. As such, GPU architects require robust tools that will enable them to quickly explore new ways to optimize GPGPUs for energy efficiency. We propose a new GPGPU power model that is configurable, capable of cycle-level calculations, and carefully validated against real hardware measurements. To achieve configurability, we use a bottom-up methodology and abstract parameters from the microarchitectural components as the model's inputs. We developed a rigorous suite of 80 microbenchmarks that we use to bound any modeling uncertainties and inaccuracies. The power model is comprehensively validated against measurements of two commercially available GPUs, and the measured error is within 9.9% and 13.4% for the two target GPUs (GTX 480 and Quadro FX5600). The model also accurately tracks the power consumption trend over time. We integrated the power model with the cycle-level simulator GPGPU-Sim and demonstrate the energy savings by utilizing dynamic voltage and frequency scaling (DVFS) and clock gating. Traditional DVFS reduces GPU energy consumption by 14.4% by leveraging within-kernel runtime variations. Finer-grained SM cluster-level DVFS improves the energy savings from 6.6% to 13.6% for those benchmarks that show clustered execution behavior. We also show that clock gating inactive lanes during divergence reduces dynamic power by 11.2%.
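The DVFS savings reported above stem from the strong dependence of dynamic power on voltage and frequency. A back-of-the-envelope Python sketch, with all constants hypothetical rather than taken from GPUWattch:

# P_dyn ~ a*C*V^2*f; activity, capacitance, and operating points below are
# made-up numbers, only meant to show the scaling argument.

def dynamic_energy(activity, cap, volts, freq_hz, seconds):
    power = activity * cap * volts**2 * freq_hz   # watts
    return power * seconds                        # joules

base = dynamic_energy(0.5, 1e-9, 1.0, 700e6, 1.0)
# Scale V and f down 20% during a memory-bound phase; that phase's runtime
# is assumed unchanged because it is bound by DRAM, not the core clock.
scaled = dynamic_energy(0.5, 1e-9, 0.8, 560e6, 1.0)
print(f"energy saved in that phase: {1 - scaled / base:.0%}")   # ~49%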
Synopsis diffusion for robust aggregation in sensor networks. Aggregating sensor readings within the network is an essential technique for conserving energy in sensor networks. Previous work proposes aggregating along a tree overlay topology in order to conserve energy. However, a tree overlay is very fragile, and the high rate of node and link failures in sensor networks often results in a large fraction of readings being unaccounted for in the aggregate. Value splitting on multi-path overlays, as proposed in TAG, reduces the variance in the error, but still results in significant errors. Previous approaches are fragile, fundamentally, because they tightly couple aggregate computation and message routing. In this paper, we propose a family of aggregation techniques, called synopsis diffusion, that decouples the two, enabling aggregation algorithms and message routing to be optimized independently. As a result, the level of redundancy in message routing (as a trade-off with energy consumption) can be adapted to both expected and encountered network conditions. We present a number of concrete examples of synopsis diffusion algorithms, including a broadcast-based instantiation of synopsis diffusion that is as energy efficient as a tree, but dramatically more robust.
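The decoupling works because synopsis diffusion relies on order- and duplicate-insensitive (ODI) synopses, so a reading that arrives over several redundant paths is counted once. A Python sketch of a Flajolet-Martin-style COUNT synopsis of the kind the paper builds on (the correction constant is the standard FM value, not specific to the paper):

import hashlib

def fm_synopsis(reading_id, bits=32):
    h = int(hashlib.sha1(str(reading_id).encode()).hexdigest(), 16)
    # index of the lowest set bit of the hash, geometrically distributed
    pos = (h & -h).bit_length() - 1 if h else bits - 1
    return 1 << min(pos, bits - 1)

def merge(s1, s2):
    return s1 | s2        # OR is idempotent: duplicate arrivals change nothing

def estimate(s):
    r = 0
    while s & (1 << r):   # lowest unset bit position
        r += 1
    return int(2 ** r / 0.77351)   # standard Flajolet-Martin correction

sk = 0
for rid in range(1000):
    sk = merge(sk, fm_synopsis(rid))
print(estimate(sk))   # crude count; in practice many sketches are averaged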
Efficient Broadcast in Structured P2P Networks In this position paper, we present an efficient algorithm for performing a broadcast operation with minimal cost in structured DHT-based P2P networks. In a system of N nodes, a broadcast message originating at an arbitrary node reaches all other nodes after exactly N - 1 messages. We emphasize the perception of a class of DHT systems as a form of distributed k-ary search and we take advantage of that perception in constructing a spanning tree that is utilized for efficient broadcasting. We consider broadcasting as a basic service that adds to existing DHTs the ability to search using arbitrary queries as well as disseminate/collect global information.
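A toy Python simulation of the interval-delegation idea, assuming correct Chord-style finger tables: each node forwards to its fingers and hands finger i responsibility for the identifier interval up to finger i+1, so every node receives the message exactly once. The overlay parameters below are arbitrary.

M = 6
RING = 2 ** M
NODES = sorted({(7 * i + 3) % RING for i in range(20)})   # toy node ids

def successor(x):
    for n in NODES:
        if n >= x:
            return n
    return NODES[0]

def fingers(n):
    fs = {successor((n + 2 ** k) % RING) for k in range(M)} - {n}
    return sorted(fs, key=lambda f: (f - n) % RING)       # ring order after n

def between(x, a, b):          # x in the open ring interval (a, b)
    if a == b:
        return x != a          # degenerate bound covers the whole ring
    return a < x < b if a < b else x > a or x < b

sends, delivered = 0, set()

def broadcast(node, limit):
    global sends
    fs = fingers(node)
    for i, f in enumerate(fs):
        if between(f, node, limit):
            nxt = fs[i + 1] if i + 1 < len(fs) and between(fs[i + 1], node, limit) else limit
            sends += 1
            delivered.add(f)
            broadcast(f, nxt)  # f now covers the interval up to nxt

broadcast(NODES[0], NODES[0])
print(sends, len(delivered) + 1, len(NODES))   # expect N-1 sends, all N nodes reached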
Investigation of the Energy Regeneration of Active Suspension System in Hybrid Electric Vehicles This paper investigates the idea of regenerating the energy of the active suspension (AS) system in hybrid electric vehicles (HEVs). For this purpose, extensive simulation and control methods are utilized to develop a simultaneous simulation in which both the HEV powertrain and AS systems are simulated in a unified medium. In addition, a hybrid energy storage system (ESS) comprising electrochemical batteries and ultracapacitors (UCs) is proposed for this application. Simulation results reveal that the regeneration of the AS energy results in improved fuel economy. Moreover, by using the hybrid ESS, AS load fluctuations are transferred from the batteries to the UCs, which, in turn, will improve the efficiency of the batteries and increase their life.
P2P-Based Service Distribution over Distributed Resources Dynamic or demand-driven service deployment in a Grid or Cloud environment is an important issue considering the varying nature of demand. Most distributed frameworks either offer static service deployment, which results in resource allocation problems, or are job-based, where for each invocation the job along with the data has to be transferred for remote execution, resulting in increased communication cost. An alternative approach is dynamic demand-driven provisioning of services as proposed in earlier literature, but the proposed methods fail to account for the volatility of resources in a Grid environment. In this paper, we propose a unique peer-to-peer based approach for dynamic service provisioning which incorporates a BitTorrent-like protocol for provisioning the service on a remote node. Being built around a P2P model, the proposed framework caters to resource volatility and also incurs lower provisioning cost.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM), and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM), and 10.48× (CMOS-based PIM), on average.
1.061872
0.04425
0.04296
0.021506
0.0088
0.001573
0.00052
0.000037
0
0
0
0
0
0
Query Learning-Based Scheme for Pertinent Resource Lookup in Mobile P2P Networks. P2P networking has attracted increasing interest worldwide among both researchers and computer networking professionals. As evidence, several P2P applications, mainly used for file sharing over the Internet, have been proposed. Considering the great success of mobile devices in recent years, P2P applications have also been deployed over mobile networks such as mobile ad-hoc networks (MANETs). However, the mismatch between the P2P overlay and the MANET underlay topologies makes the resource lookup mechanism in mobile P2P applications very difficult. Therefore, this downside is the main hindrance to the deployment of such applications over MANETs. To overcome the mismatch issue, we propose in this paper RLSM-P2P, a cross-layer resource lookup scheme for mobile P2P applications. The main thrust of RLSM-P2P consists of building an efficient unstructured P2P overlay that closely matches the underlying physical network and swiftly adapts to its volatility and dynamicity by considering different MANET constraints. Furthermore, RLSM-P2P relies on a query-learning resource lookup mechanism for locating resources pertinent to user queries. The performed experiments show that RLSM-P2P outperforms its competitors in terms of effectiveness and efficiency.
Enhancing peer-to-peer content discovery techniques over mobile ad hoc networks Content dissemination over mobile ad hoc networks (MANETs) is usually performed using peer-to-peer (P2P) networks due to their increased resiliency and efficiency when compared to client-server approaches. P2P networks are usually divided into two types, structured and unstructured, based on their content discovery strategy. Unstructured networks use controlled flooding, while structured networks use distributed indexes. This article evaluates the performance of these two approaches over MANETs and proposes modifications to improve their performance. Results show that unstructured protocols are extremely resilient; however, they are not scalable and present high energy consumption and delay. Structured protocols are more energy-efficient; however, they have a poor performance in dynamic environments due to the frequent loss of query messages. Based on those observations, we employ selective forwarding to decrease the bandwidth consumption in unstructured networks, and introduce redundant query messages in structured P2P networks to increase their success ratio.
Reducing query overhead through route learning in unstructured peer-to-peer network In unstructured peer-to-peer networks, such as Gnutella, peers propagate query messages towards the resource holders by flooding them through the network. This is, however, a costly operation since it consumes node and link resources excessively and often unnecessarily. There is no reason, for example, for a peer to receive a query message if the peer has no matching resource or is not on the path to a peer holding a matching resource. In this paper, we present a solution to this problem, which we call Route Learning, aiming to reduce query traffic in unstructured peer-to-peer networks. In Route Learning, peers try to identify the most likely neighbors through which replies can be obtained to submitted queries. In this way, a query is forwarded only to a subset of the neighbors of a peer, or it is dropped if no neighbor, likely to reply, is found. The scheme also has mechanisms to cope with variations in user submitted queries, like changes in the keywords. The scheme can also evaluate the route for a query for which it is not trained. We show through simulation results that when compared to a pure flooding based querying approach, our scheme reduces bandwidth overhead significantly without sacrificing user satisfaction.
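A hypothetical Python sketch of the neighbor-scoring idea (the class, parameter names, and prior are made up, and the exact statistics the paper keeps differ): score each (keyword, neighbor) pair by past reply success, forward only to promising neighbors, and drop the query when none qualify.

from collections import defaultdict

class RouteLearningPeer:
    def __init__(self, neighbors, threshold=0.3):
        self.neighbors = neighbors
        self.threshold = threshold
        # optimistic prior (1 hit / 2 tries) keeps untrained neighbors eligible
        self.stats = defaultdict(lambda: [1, 2])

    def score(self, keyword, nbr):
        hits, tries = self.stats[(keyword, nbr)]
        return hits / tries

    def forward_set(self, keyword):
        # forward only to promising neighbors; drop the query if none qualify
        return [n for n in self.neighbors if self.score(keyword, n) >= self.threshold]

    def feedback(self, keyword, nbr, replied):
        rec = self.stats[(keyword, nbr)]
        rec[0] += 1 if replied else 0
        rec[1] += 1

peer = RouteLearningPeer(["n1", "n2", "n3"])
peer.feedback("music", "n2", replied=True)
peer.feedback("music", "n3", replied=False)
print(peer.forward_set("music"))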
A Trusted Routing Scheme Using Blockchain and Reinforcement Learning for Wireless Sensor Networks. A trusted routing scheme is very important to ensure the routing security and efficiency of wireless sensor networks (WSNs). Many studies have sought to improve the trustworthiness between routing nodes using cryptographic systems, trust management, or centralized routing decisions. However, most of the routing schemes are difficult to achieve in actual situations as it is difficult to dynamically identify the untrusted behaviors of routing nodes. Meanwhile, there is still no effective way to prevent malicious node attacks. In view of these problems, this paper proposes a trusted routing scheme using blockchain and reinforcement learning to improve the routing security and efficiency for WSNs. The feasible routing scheme is given for obtaining routing information of routing nodes on the blockchain, which makes the routing information traceable and impossible to tamper with. The reinforcement learning model is used to help routing nodes dynamically select more trusted and efficient routing links. From the experimental results, we can find that even in the routing environment with 50% malicious nodes, our routing scheme still has a good delay performance compared with other routing algorithms. The performance indicators such as energy consumption and throughput also show that our scheme is feasible and effective.
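The reinforcement-learning half of such a scheme can be sketched in Python as a per-next-hop value estimate updated from observed delivery outcomes; the reward shape and constants below are illustrative assumptions, not the paper's.

import random

Q = {"hop_a": 0.0, "hop_b": 0.0, "hop_c": 0.0}   # value estimate per next hop
ALPHA, EPSILON = 0.2, 0.1

def pick_next_hop():
    if random.random() < EPSILON:        # explore occasionally
        return random.choice(list(Q))
    return max(Q, key=Q.get)             # otherwise exploit the best-known link

def update(hop, delivered, latency):
    # reward trusted, fast links; penalize drops and delay (made-up weights)
    reward = (1.0 if delivered else -1.0) - 0.01 * latency
    Q[hop] += ALPHA * (reward - Q[hop])  # one-step value update

for _ in range(200):                     # toy training loop: hop_c misbehaves
    hop = pick_next_hop()
    update(hop, delivered=(hop != "hop_c"), latency=random.uniform(1, 10))
print(max(Q, key=Q.get))                 # a trusted hop wins out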
Decentralized Multi-Agent Reinforcement Learning With Networked Agents: Recent Advances Multi-agent reinforcement learning (MARL) has long been a significant research topic in both machine learning and control systems. Recent development of (single-agent) deep reinforcement learning has created a resurgence of interest in developing new MARL algorithms, especially those founded on theoretical analysis. In this paper, we review recent advances on a sub-area of this topic: decentralized MARL with networked agents. In this scenario, multiple agents perform sequential decision-making in a common environment, and without the coordination of any central controller, while being allowed to exchange information with their neighbors over a communication network. Such a setting finds broad applications in the control and operation of robots, unmanned vehicles, mobile sensor networks, and the smart grid. This review covers several of our research endeavors in this direction, as well as progress made by other researchers along the line. We hope that this review promotes additional research efforts in this exciting yet challenging area.
Chord: a scalable peer-to-peer lookup protocol for internet applications A fundamental problem that confronts peer-to-peer applications is the efficient location of the node that stores a desired data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis and simulations show that Chord is scalable: Communication cost and the state maintained by each node scale logarithmically with the number of Chord nodes.
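Chord's single operation can be illustrated with plain consistent hashing in Python: keys and node identifiers share one ring, and a key belongs to its successor. The sketch below computes the mapping centrally; real Chord resolves it in O(log N) hops via finger tables.

import hashlib

RING_BITS = 16

def ring_id(name):
    # hash keys and node names onto the same identifier circle
    return int(hashlib.sha1(name.encode()).hexdigest(), 16) % (2 ** RING_BITS)

nodes = sorted(ring_id(f"node-{i}") for i in range(8))

def lookup(key):
    kid = ring_id(key)
    for n in nodes:          # first node at or after the key's id
        if n >= kid:
            return n
    return nodes[0]          # wrap around the ring

print(lookup("some-data-item"))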
Randomized algorithms This text by two well-known experts in the field presents the basic concepts in the design and analysis of randomized algorithms at a level accessible to beginning graduate students, professionals and researchers.
A Formal Basis for the Heuristic Determination of Minimum Cost Paths Although the problem of determining the minimum cost path through a graph arises naturally in a number of interesting applications, there has been no underlying theory to guide the development of efficient search procedures. Moreover, there is no adequate conceptual framework within which the various ad hoc search strategies proposed to date can be compared. This paper describes how heuristic information from the problem domain can be incorporated into a formal mathematical theory of graph searching and demonstrates an optimality property of a class of search strategies.
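The strategy the paper formalizes is what is now known as A*: expand nodes in increasing f(n) = g(n) + h(n), where g is the cost so far and h is an admissible (never-overestimating) heuristic. A compact Python sketch on a toy grid with a Manhattan-distance heuristic:

import heapq

def astar(start, goal, walls, size=8):
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])   # admissible
    frontier = [(h(start), 0, start, [start])]                # (f, g, pos, path)
    seen = set()
    while frontier:
        f, g, pos, path = heapq.heappop(frontier)             # lowest f first
        if pos == goal:
            return path
        if pos in seen:
            continue
        seen.add(pos)
        x, y = pos
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nxt[0] < size and 0 <= nxt[1] < size and nxt not in walls:
                heapq.heappush(frontier, (g + 1 + h(nxt), g + 1, nxt, path + [nxt]))
    return None

print(astar((0, 0), (5, 5), walls={(1, 1), (2, 2), (3, 3)}))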
Consensus problems in networks of agents with switching topology and time-delays. In this paper, we discuss consensus problems for a network of dynamic agents with fixed and switching topologies. We analyze three cases: i) networks with switching topology and no time-delays, ii) networks with fixed topology and communication time-delays, and iii) max-consensus problems (or leader determination) for groups of discrete-time agents. In each case, we introduce a linear/nonlinear consensus protocol and provide convergence analysis for the proposed distributed algorithm. Moreover, we establish a connection between the Fiedler eigenvalue of the information flow in a network (i.e. algebraic connectivity of the network) and the negotiation speed (or performance) of the corresponding agreement protocol. It turns out that balanced digraphs play an important role in addressing average-consensus problems. We introduce disagreement functions that play the role of Lyapunov functions in convergence analysis of consensus protocols. A distinctive feature of this work is to address consensus problems for networks with directed information flow. We provide analytical tools that rely on algebraic graph theory, matrix theory, and control theory. Simulations are provided that demonstrate the effectiveness of our theoretical results.
Gossip-based aggregation in large dynamic networks As computer networks increase in size, become more heterogeneous and span greater geographic distances, applications must be designed to cope with the very large scale, poor reliability, and often, with the extreme dynamism of the underlying network. Aggregation is a key functional building block for such applications: it refers to a set of functions that provide components of a distributed system access to global information including network size, average load, average uptime, location and description of hotspots, and so on. Local access to global information is often very useful, if not indispensable, for building applications that are robust and adaptive. For example, in an industrial control application, some aggregate value reaching a threshold may trigger the execution of certain actions; a distributed storage system will want to know the total available free space; load-balancing protocols may benefit from knowing the target average load so as to minimize the load they transfer. We propose a gossip-based protocol for computing aggregate values over network components in a fully decentralized fashion. The class of aggregate functions we can compute is very broad and includes many useful special cases such as counting, averages, sums, products, and extremal values. The protocol is suitable for extremely large and highly dynamic systems due to its proactive structure: all nodes receive the aggregate value continuously, thus being able to track any changes in the system. The protocol is also extremely lightweight, making it suitable for many distributed applications including peer-to-peer and grid computing systems. We demonstrate the efficiency and robustness of our gossip-based protocol both theoretically and experimentally under a variety of scenarios including node and communication failures.
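The core of such a protocol is push-pull averaging: in each round a node and a random peer replace both their estimates with the mean, so all estimates converge to the global average. A minimal Python simulation (network size and round count are arbitrary) that recovers COUNT from the average:

import random

# Node 0 starts at 1, everyone else at 0, so the global average is 1/N
# and each node can estimate the network size as 1 / its local value.
values = [1.0] + [0.0] * 99

for _ in range(30):                       # gossip rounds
    for i in range(len(values)):
        j = random.randrange(len(values)) # pick a random peer
        avg = (values[i] + values[j]) / 2
        values[i] = values[j] = avg       # both keep the mean (mass conserved)

print(1 / values[0])                      # ≈ 100, the number of nodes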
On receding horizon feedback control Receding horizon feedback control (RHFC) was originally introduced as an easy method for designing stable state-feedback controllers for linear systems. Here those results are generalized to the control of nonlinear autonomous systems, and we develop a performance index which is minimized by the RHFC (inverse optimal control problem). Previous results for linear systems have shown that desirable nonlinear controllers can be developed by making the RHFC horizon distance a function of the state. That functional dependence was implicit and difficult to implement on-line. Here we develop similar controllers for which the horizon distance is an easily computed explicit function of the state.
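Independent of the paper's analysis, the receding-horizon loop itself is simple to sketch in Python: at every step, optimize a short control sequence over horizon H, apply only the first input, and re-solve from the new state. The brute-force search below stands in for a proper optimizer, and the scalar system and weights are made up:

import itertools

A, B, H = 1.1, 1.0, 5                     # unstable scalar plant x' = A*x + B*u
CANDIDATES = [-1.0, -0.5, 0.0, 0.5, 1.0]  # coarse control grid

def cost(x, us):
    total = 0.0
    for u in us:
        x = A * x + B * u
        total += x * x + 0.1 * u * u      # quadratic stage cost
    return total

x = 5.0
for _ in range(20):
    # solve the finite-horizon problem by exhaustive search over sequences
    best = min(itertools.product(CANDIDATES, repeat=H), key=lambda us: cost(x, us))
    x = A * x + B * best[0]               # apply only the first input, then re-solve
print(round(x, 3))                        # state driven near the origin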
Cross-layer sensors for green cognitive radio. Green cognitive radio is a cognitive radio (CR) that is aware of sustainable development issues and deals with an additional constraint as regards the decision-making function of the cognitive cycle. In this paper, it is explained how the sensors distributed throughout the different layers of our CR model could help on taking the best decision in order to best contribute to sustainable development.
20.3 A feedforward controlled on-chip switched-capacitor voltage regulator delivering 10W in 32nm SOI CMOS On-chip (or fully integrated) switched-capacitor (SC) voltage regulators (SCVR) have recently received a lot of attention due to their ease of monolithic integration. The use of deep trench capacitors can lead to SCVR implementations that simultaneously achieve high efficiency, high power density, and fast response time. For the application of granular power distribution of many-core microprocessor systems, the on-chip SCVR must maintain an output voltage above a certain minimum level Vout,min in order for the microprocessor core to meet setup time requirements. Following a transient load change, the output voltage typically exhibits a droop due to parasitic inductances and resistances in the power distribution network. Therefore, the steady-state output voltage is kept high enough to ensure Vout > Vout,min at all times, thereby introducing an output voltage overhead that leads to increased system power consumption. The output voltage droop can be reduced by implementing fast regulation and a sufficient amount of on-chip decoupling capacitance. However, a large amount of on-chip decoupling capacitance is needed to significantly reduce the droop, and it becomes impractical to implement owing to the large chip area overhead required. This paper presents a feedforward control scheme that significantly reduces the output voltage droop in the presence of a large input voltage droop following a transient event. This in turn reduces the required output voltage overhead and may lead to significant overall system power savings.
An Energy-Efficient SAR ADC With Event-Triggered Error Correction This brief presents an energy-efficient fully differential 10-bit successive approximation register (SAR) analog-to-digital converter (ADC) with a sampling rate of 320 kS/s. The optimal capacitor split and bypass number is analyzed to achieve the highest switching energy efficiency. The common-mode voltage level remains constant during the MSB-capacitor switching cycles. To minimize nonlinearity due to charge averaging voltage offset or DAC array mismatch, an event-triggered error correction method is employed as a redundant cycle for detecting digital code errors within 1 least significant bit (LSB). A test chip was fabricated using the 180-nm CMOS process and occupied a 0.0564-mm² core area. Under a regular 0.65-V supply voltage, the ADC achieved an effective number of bits of 9.61 bits and a figure of merit (FOM) of 6.38 fJ/conversion-step, with 1.6-μW power dissipation for a low-frequency input. The measured differential and integral nonlinearity results are within 0.30 LSB and 0.43 LSB, respectively.
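The successive-approximation algorithm at the heart of such an ADC is a bitwise binary search against the DAC level. A Python sketch of that loop; the brief's event-triggered redundant correction cycle is not modeled here, and the voltages are only illustrative:

def sar_convert(vin, vref=0.65, bits=10):
    code = 0
    for b in reversed(range(bits)):          # MSB first, one bit per cycle
        trial = code | (1 << b)              # tentatively set this bit
        if vin >= trial * vref / (1 << bits):  # comparator vs. DAC output
            code = trial                     # keep the bit, else clear it
    return code

print(sar_convert(0.4))   # ~630 for these values (0.4/0.65 of full scale)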
1.2
0.2
0.2
0.2
0.2
0.007692
0
0
0
0
0
0
0
0
Rowhammer.js: A remote software-induced fault attack in JavaScript A fundamental assumption in software security is that a memory location can only be modified by processes that may write to this memory location. However, a recent study has shown that parasitic effects in DRAM can change the content of a memory cell without accessing it, but by accessing other memory locations in a high frequency. This so-called Rowhammer bug occurs in most of today's memory modules and has fatal consequences for the security of all affected systems, e.g., privilege escalation attacks. All studies and attacks related to Rowhammer so far rely on the availability of a cache flush instruction in order to cause accesses to DRAM modules at a sufficiently high frequency. We overcome this limitation by defeating complex cache replacement policies. We show that caches can be forced into fast cache eviction to trigger the Rowhammer bug with only regular memory accesses. This makes it possible to trigger the Rowhammer bug even in highly restricted scripting environments. We demonstrate a fully automated attack that requires nothing but a website with JavaScript to trigger faults on remote hardware. Thereby we can gain unrestricted access to systems of website visitors. We show that the attack works on off-the-shelf systems. Existing countermeasures fail to protect against this new Rowhammer attack.
Exploiting Correcting Codes: On the Effectiveness of ECC Memory Against Rowhammer Attacks Given the increasing impact of Rowhammer, and the dearth of adequate other hardware defenses, many in the security community have pinned their hopes on error-correcting code (ECC) memory as one of the few practical defenses against Rowhammer attacks. Specifically, the expectation is that the ECC algorithm will correct or detect any bits they manage to flip in memory in real-world settings. However, the extent to which ECC really protects against Rowhammer is an open research question, due to two key challenges. First, the details of the ECC implementations in commodity systems are not known. Second, existing Rowhammer exploitation techniques cannot yield reliable attacks in presence of ECC memory. In this paper, we address both challenges and provide concrete evidence of the susceptibility of ECC memory to Rowhammer attacks. To address the first challenge, we describe a novel approach that combines a custom-made hardware probe, Rowhammer bit flips, and a cold boot attack to reverse engineer ECC functions on commodity AMD and Intel processors. To address the second challenge, we present ECCploit, a new Rowhammer attack based on composable, data-controlled bit flips and a novel side channel in the ECC memory controller. We show that, while ECC memory does reduce the attack surface for Rowhammer, ECCploit still allows an attacker to mount reliable Rowhammer attacks against vulnerable ECC memory on a variety of systems and configurations. In addition, we show that, despite the non-trivial constraints imposed by ECC, ECCploit can still be powerful in practice and mimic the behavior of prior Rowhammer exploits.
Towards Evaluating the Robustness of Neural Networks Neural networks provide state-of-the-art results for most machine learning tasks. Unfortunately, neural networks are vulnerable to adversarial examples: given an input x and any target classification t, it is possible to find a new input x' that is similar to x but classified as t. This makes it difficult to apply neural networks in security-critical areas. Defensive distillation is a recently proposed approach that can take an arbitrary neural network, and increase its robustness, reducing the success rate of current attacks' ability to find adversarial examples from 95% to 0.5%. In this paper, we demonstrate that defensive distillation does not significantly increase the robustness of neural networks by introducing three new attack algorithms that are successful on both distilled and undistilled neural networks with 100% probability. Our attacks are tailored to three distance metrics used previously in the literature, and when compared to previous adversarial example generation algorithms, our attacks are often much more effective (and never worse). Furthermore, we propose using high-confidence adversarial examples in a simple transferability test we show can also be used to break defensive distillation. We hope our attacks will be used as a benchmark in future defense attempts to create neural networks that resist adversarial examples.
TRRespass: Exploiting the Many Sides of Target Row Refresh After a plethora of high-profile RowHammer attacks, CPU and DRAM vendors scrambled to deliver what was meant to be the definitive hardware solution against the RowHammer problem: Target Row Refresh (TRR). A common belief among practitioners is that, for the latest generation of DDR4 systems that are protected by TRR, RowHammer is no longer an issue in practice. However, in reality, very little is known about TRR. How does TRR exactly prevent RowHammer? Which parts of a system are responsible for operating the TRR mechanism? Does TRR completely solve the RowHammer problem or does it have weaknesses? In this paper, we demystify the inner workings of TRR and debunk its security guarantees. We show that what is advertised as a single mitigation mechanism is actually a series of different solutions coalesced under the umbrella term Target Row Refresh. We inspect and disclose, via a deep analysis, different existing TRR solutions and demonstrate that modern implementations operate entirely inside DRAM chips. Despite the difficulties of analyzing in-DRAM mitigations, we describe novel techniques for gaining insights into the operation of these mitigation mechanisms. These insights allow us to build TRRespass, a scalable black-box RowHammer fuzzer that we evaluate on 42 recent DDR4 modules. TRRespass shows that even the latest generation DDR4 chips with in-DRAM TRR, immune to all known RowHammer attacks, are often still vulnerable to new TRR-aware variants of RowHammer that we develop. In particular, TRRespass finds that, on present-day DDR4 modules, RowHammer is still possible when many aggressor rows are used (as many as 19 in some cases), with a method we generally refer to as Many-sided RowHammer. Overall, our analysis shows that 13 out of the 42 modules from all three major DRAM vendors (i.e., Samsung, Micron, and Hynix) are vulnerable to our TRR-aware RowHammer access patterns, and thus one can still mount existing state-of-the-art system-level RowHammer attacks. In addition to DDR4, we also experiment with LPDDR4(X) chips and show that they are susceptible to RowHammer bit flips too. Our results provide concrete evidence that the pursuit of better RowHammer mitigations must continue.
Virtual Platform to Analyze the Security of a System on Chip at Microarchitectural Level The processors (CPUs) embedded in a System on Chip (SoC) have to face recent attacks taking advantage of vulnerabilities/features in their microarchitectures to retrieve secret information. Indeed, the increase in complexity of modern CPUs and SoCs is mainly driven by the pursuit of performance rather than security. Even if efforts such as isolation techniques have been made to thwart cyberattacks, most mi...
HexPADS: A Platform to Detect "Stealth" Attacks. Current systems are under constant attack from many different sources. Both local and remote attackers try to escalate their privileges to exfiltrate data or to gain arbitrary code execution. While inline defense mechanisms like DEP, ASLR, or stack canaries are important, they have a local, program-centric view and miss some attacks. Intrusion Detection Systems (IDS) use runtime monitors to measure the current state and behavior of the system to detect attacks orthogonally to active defenses. Attacks change the execution behavior of a system. Our attack detection system HexPADS detects attacks through divergences from normal behavior using attack signatures. HexPADS collects information from the operating system on runtime performance metrics with measurements from hardware performance counters for individual processes. Cache behavior is a strong indicator of ongoing attacks like rowhammer, side channels, covert channels, or CAIN attacks. Collecting performance metrics across all running processes allows the correlation and detection of these attacks. In addition, HexPADS can mitigate the attacks or significantly reduce their effectiveness with negligible overhead to benign processes.
I See Dead µops: Leaking Secrets via Intel/AMD Micro-Op Caches Modern Intel, AMD, and ARM processors translate complex instructions into simpler internal micro-ops that are then cached in a dedicated on-chip structure called the micro-op cache. This work presents an in-depth characterization study of the micro-op cache, reverse-engineering many undocumented features, and further describes attacks that exploit the micro-op cache as a timing channel to transmit secret information. In particular, this paper describes three attacks – (1) a same thread cross-domain attack that leaks secrets across the user-kernel boundary, (2) a cross-SMT thread attack that transmits secrets across two SMT threads via the micro-op cache, and (3) transient execution attacks that have the ability to leak an unauthorized secret accessed along a misspeculated path, even before the transient instruction is dispatched to execution, breaking several existing invisible speculation and fencing-based solutions that mitigate Spectre.
The geometry of innocent flesh on the bone: return-into-libc without function calls (on the x86) We present new techniques that allow a return-into-libc attack that calls no functions at all to be mounted on x86 executables. Our attack combines a large number of short instruction sequences to build gadgets that allow arbitrary computation. We show how to discover such instruction sequences by means of static analysis. We make use, in an essential way, of the properties of the x86 instruction set.
Accelerating Dependent Cache Misses with an Enhanced Memory Controller. On-chip contention increases memory access latency for multicore processors. We identify that this additional latency has a substantial effect on performance for an important class of latency-critical memory operations: those that result in a cache miss and are dependent on data from a prior cache miss. We observe that the number of instructions between the first cache miss and its dependent cache miss is usually small. To minimize dependent cache miss latency, we propose adding just enough functionality to dynamically identify these instructions at the core and migrate them to the memory controller for execution as soon as source data arrives from DRAM. This migration allows memory requests issued by our new Enhanced Memory Controller (EMC) to experience a 20% lower latency than if issued by the core. On a set of memory intensive quad-core workloads, the EMC results in a 13% improvement in system performance and a 5% reduction in energy consumption over a system with a Global History Buffer prefetcher, the highest performing prefetcher in our evaluation.
A lightweight infrastructure for graph analytics Several domain-specific languages (DSLs) for parallel graph analytics have been proposed recently. In this paper, we argue that existing DSLs can be implemented on top of a general-purpose infrastructure that (i) supports very fine-grain tasks, (ii) implements autonomous, speculative execution of these tasks, and (iii) allows application-specific control of task scheduling policies. To support this claim, we describe such an implementation called the Galois system. We demonstrate the capabilities of this infrastructure in three ways. First, we implement more sophisticated algorithms for some of the graph analytics problems tackled by previous DSLs and show that end-to-end performance can be improved by orders of magnitude even on power-law graphs, thanks to the better algorithms facilitated by a more general programming model. Second, we show that, even when an algorithm can be expressed in existing DSLs, the implementation of that algorithm in the more general system can be orders of magnitude faster when the input graphs are road networks and similar graphs with high diameter, thanks to more sophisticated scheduling. Third, we implement the APIs of three existing graph DSLs on top of the common infrastructure in a few hundred lines of code and show that even for power-law graphs, the performance of the resulting implementations often exceeds that of the original DSL systems, thanks to the lightweight infrastructure.
Time-varying graphs and dynamic networks The past decade has seen intensive research efforts on highly dynamic wireless and mobile networks (variously called delay-tolerant, disruption-tolerant, challenged, opportunistic, etc.) whose essential feature is a possible absence of end-to-end communication routes at any instant. As part of these efforts, a number of important concepts have been identified, based on new meanings of distance and connectivity. The main contribution of this paper is to review and integrate the collection of these concepts, formalisms, and related results found in the literature into a unified coherent framework, called TVG (for time-varying graphs). Besides this definitional work, we connect the various assumptions through a hierarchy of classes of TVGs defined with respect to properties with algorithmic significance in distributed computing. One of these classes coincides with the family of dynamic graphs over which population protocols are defined. We examine the (strict) inclusion hierarchy among the classes. The paper also provides a quick review of recent stochastic models for dynamic networks that aim to enable analytical investigation of the dynamics.
Automatic RTL Test Generation from SystemC TLM Specifications SystemC transaction-level modeling (TLM) is widely used to enable early exploration for both hardware and software designs. It can reduce the overall design and validation effort of complex system-on-chip (SOC) architectures. However, due to lack of automated techniques coupled with limited reuse of validation efforts between abstraction levels, SOC validation is becoming a major bottleneck. This article presents a novel top-down methodology for automatically generating register transfer-level (RTL) tests from SystemC TLM specifications. It makes two important contributions: (i) it proposes a method that can automatically generate TLM tests using various coverage metrics, and (ii) it develops a test refinement specification for automatically converting TLM tests to RTL tests in order to reduce overall validation effort. We have developed a tool which incorporates these activities to enable automated RTL test generation from SystemC TLM specifications. Case studies using a router example and a 64-bit Alpha AXP pipelined processor demonstrate that our approach can achieve intended functional coverage of the RTL designs, as well as capture various functional errors and inconsistencies between specifications and implementations.
A 10-Gb/s CDR With an Adaptive Optimum Loop-Bandwidth Calibrator for Serial Communication Links. This paper describes a 10-Gb/s clock-and-data recovery (CDR) circuit with a background optimum loop-bandwidth calibrator. The proposed CDR automatically achieves the minimum mean-square error between jittery input data and the recovered clock signal by adjusting the bandwidth of the CDR using Kalman filtering theory. A test chip is fabricated in a 0.11 μm CMOS process and the adaptive optimum loop-bandwidth ...
An Energy-Efficient SAR ADC With Event-Triggered Error Correction This brief presents an energy-efficient fully differential 10-bit successive approximation register (SAR) analog-to-digital converter (ADC) with a sampling rate of 320 kS/s. The optimal capacitor split and bypass number is analyzed to achieve the highest switching energy efficiency. The common-mode voltage level remains constant during the MSB-capacitor switching cycles. To minimize nonlinearity due to charge averaging voltage offset or DAC array mismatch, an event-triggered error correction method is employed as a redundant cycle for detecting digital code errors within 1 least significant bit (LSB). A test chip was fabricated using the 180-nm CMOS process and occupied a 0.0564-mm² core area. Under a regular 0.65-V supply voltage, the ADC achieved an effective number of bits of 9.61 bits and a figure of merit (FOM) of 6.38 fJ/conversion-step, with 1.6-μW power dissipation for a low-frequency input. The measured differential and integral nonlinearity results are within 0.30 LSB and 0.43 LSB, respectively.
1.042563
0.044267
0.04
0.04
0.04
0.019431
0.006778
0.001139
0.000028
0
0
0
0
0