Dataset columns:

  Query Text               string   10 chars min, 40.4k max
  Ranking 1 .. Ranking 10  string   10-13 chars min, 36.2k-40.4k max
  Ranking 11               string   20 chars min, 6.21k max
  Ranking 12               string   14 chars min, 8.24k max
  Ranking 13               string   28 chars min, 4.03k max
  score_0                  float64  1 to 1.25
  score_1 .. score_6       float64  0 to 0.25
  score_7                  float64  0 to 0.24
  score_8                  float64  0 to 0.2
  score_9                  float64  0 to 0.03
  score_10 .. score_13     float64  all 0
Real-time correlation for locating systems utilizing heterogeneous computing architectures The use of locating systems in sports (e.g. soccer) elevates match and training analysis to a new level. By tracking players and the ball during matches or training sessions, player performance can be analyzed, training can be adapted, and new strategies can be developed. The radio-based RedFIR system equips players and the ball with miniaturized transmitters, while antennas distributed around the playing field receive the transmitted radio signals. A cluster computer at the back end processes these signals to determine exact positions based on the signals' Time Of Arrival (TOA). While such a system works well, it is neither scalable nor inexpensive due to the required computing cluster, and the relatively high power consumption of the GPU-based cluster is suboptimal. Moreover, high-speed interconnects between the antennas and the cluster computers introduce additional costs and increase the installation effort. However, a significant portion of the computing performance is required not for the synthesis of the received data, but for the calculation of the individual TOA values of every receiver line. Therefore, in this paper we propose a smart sensor approach: by integrating some intelligence into the antenna (smart antenna), each antenna can correlate the received signal independently of the remaining system, and only TOA values are sent to the back end. While the idea is quite simple, the question of a well-suited computer architecture to fulfill this task inside the smart antenna is more complex. We therefore evaluate embedded architectures such as FPGAs, ARM cores, and a many-core CPU (Epiphany) for this approach, achieving 50,000 correlations per second in each smart antenna.
As a result, the back end becomes lightweight, cheaper interconnects suffice thanks to the data reduction, and the system becomes more scalable, since most of the processing power already resides in the antenna.
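The TOA extraction such a smart antenna performs is, at its core, a matched-filter cross-correlation: correlate the received samples against the known transmit sequence and take the lag of the peak. A minimal sketch (not the RedFIR implementation; the template, sample rate, and signal here are made up for illustration):

```python
import numpy as np

def estimate_toa(received, template, fs):
    """Estimate time of arrival by matched filtering.

    Cross-correlates the received samples with the known transmit
    template; the lag of the correlation peak is the TOA in samples,
    converted to seconds via the sample rate fs.
    """
    corr = np.correlate(received, template, mode="valid")
    lag = int(np.argmax(np.abs(corr)))
    return lag / fs

# Toy example: a 5-chip template buried after 7 samples of silence.
template = np.array([1.0, -1.0, 1.0, 1.0, -1.0])
rx = np.concatenate([np.zeros(7), template, np.zeros(4)])
print(estimate_toa(rx, template, fs=1.0))  # -> 7.0
```

In a real smart antenna the peak search would run continuously over a sliding window, which is exactly the throughput-bound kernel the paper maps onto FPGAs, ARM cores, and the Epiphany.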
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use "dominance frontiers", a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
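To make the dominance-frontier concept concrete: a join point belongs to the frontier of every node that dominates one of its predecessors without strictly dominating the join itself. A small sketch using the later Cooper-Harvey-Kennedy formulation (not the algorithm of this paper; the node names and CFG are hypothetical):

```python
def dominance_frontiers(preds, idom):
    """Compute dominance frontiers from immediate dominators.

    preds maps each node to its CFG predecessors; idom maps each node
    to its immediate dominator (the entry node dominates itself).
    A join point lands in the frontier of every node on the path from
    each predecessor up to, but excluding, the join's immediate dominator.
    """
    df = {n: set() for n in idom}
    for n, ps in preds.items():
        if len(ps) >= 2:  # only join points contribute frontiers
            for p in ps:
                runner = p
                while runner != idom[n]:
                    df[runner].add(n)
                    runner = idom[runner]
    return df

# Diamond CFG: 0 -> {1, 2} -> 3; node 3 is the join point.
df = dominance_frontiers({1: [0], 2: [0], 3: [1, 2]},
                         {0: 0, 1: 0, 2: 0, 3: 0})
print(df[1], df[2])  # -> {3} {3}
```

The frontiers are exactly the places where SSA phi-functions must be inserted, which is why the concept drives the construction the paper describes.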
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ◊W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ◊W. Thus, ◊W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
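Chord's single operation, mapping a key onto a node, reduces to finding the key identifier's successor on the hash circle. A minimal non-distributed sketch of that mapping (the identifier size, node IDs, and SHA-1 truncation are illustrative choices, not Chord's exact parameters):

```python
import hashlib

def chord_id(key, m=8):
    # Hash a key onto the m-bit identifier circle (SHA-1, truncated).
    return int(hashlib.sha1(key.encode()).hexdigest(), 16) % (2 ** m)

def successor(node_ids, ident):
    # The node responsible for an identifier is the first node met
    # moving clockwise around the circle (the identifier's successor).
    ring = sorted(node_ids)
    for n in ring:
        if n >= ident:
            return n
    return ring[0]  # wrap around past the top of the circle

# Toy ring with three nodes; key 60 maps to node 200, key 250 wraps to 10.
print(successor([10, 50, 200], 60), successor([10, 50, 200], 250))  # -> 200 10
```

The distributed protocol replaces the sorted list with per-node finger tables so each lookup touches only O(log N) nodes, which is where the logarithmic scaling in the abstract comes from.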
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like the energy-delay-area² product (EDA²P) and the energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing.
Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA²P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
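As one concrete instance of the splitting described above, the lasso's ADMM iteration alternates a ridge-style linear solve, elementwise soft-thresholding, and a dual update. A minimal dense-matrix sketch (the choices of rho, the iteration count, and the toy data are arbitrary, and a serious implementation would cache a factorization and use a stopping criterion):

```python
import numpy as np

def lasso_admm(A, b, lam, rho=1.0, iters=200):
    # min (1/2)||Ax - b||^2 + lam*||z||_1  subject to  x = z.
    n = A.shape[1]
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    AtA = A.T @ A + rho * np.eye(n)   # formed once, outside the loop
    Atb = A.T @ b
    for _ in range(iters):
        x = np.linalg.solve(AtA, Atb + rho * (z - u))            # x-update
        v = x + u
        z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)  # soft threshold
        u = u + x - z                                            # dual update
    return z

# With A = I the lasso solution is soft-thresholding of b:
# soft([3.0, 0.1], lam=1.0) = [2.0, 0.0].
print(lasso_admm(np.eye(2), np.array([3.0, 0.1]), 1.0))
```

The x-update touches only local data once A and b are partitioned across machines, which is what makes this splitting attractive for the distributed settings the review surveys.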
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)-size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε²}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D + √n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(log N), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator; while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via error amplifier) with a moderate-gain high-frequency path (via bandpass filter) at the inputs of PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate two zeros required in a Type III compensator. Constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process with constant Gm/C biasing technique being applied to one of the designs. Measurement result shows that converter output is settled within 7 μs for a load current step of 500 mA. Peak efficiency of 97% is obtained at 360 mW output power, and high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of proposed compensator is reduced by > 75 % in both designs, compared to an equivalent conventional Type III compensator.
Fundamental analysis of a car to car visible light communication system This paper presents a mathematical model for car-to-car (C2C) visible light communications (VLC) that aims to predict the system performance under different communication geometries. A market-weighted headlamp beam pattern model is employed. We consider both the line-of-sight (LOS) and non-line-of-sight (NLOS) links, and outline the relationship between the communication distance, the system bit error rate (BER) performance and the BER distribution on a vertical plane. Results show that by placing a photodetector (PD) at a height of 0.2-0.4 m above the road surface in the car, the communication coverage range can be extended up to 20 m at a data rate of 2 Mbps.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
score_0 to score_13: 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
A 90 nm CMOS 16 Gb/s Transceiver for Optical Interconnects Interconnect architectures which leverage high-bandwidth optical channels offer a promising solution to address the increasing chip-to-chip I/O bandwidth demands. This paper describes a dense, high-speed, and low-power CMOS optical interconnect transceiver architecture. Vertical-cavity surface-emitting laser (VCSEL) data rate is extended for a given average current and corresponding reliability le...
Design of Automotive VCSEL Transmitter with On-Chip Feedforward Optical Power Control We propose a novel 50 Mb/s optical transmitter fabricated in a 0.6 μm BiCMOS technology for automotive applications. The proposed VCSEL driver chip was designed to operate with a single supply voltage ranging from 3.0 V to 5.25 V. A fully integrated feedforward current control circuit is presented to stabilize the optical output power without any external components. The experimental results show that the optical output power can be kept stable within a 1.1 dB range and the extinction ratio greater than 14 dB over the automotive environmental temperature range of −40 °C to 105 °C.
Design of a 56 Gbit/s 4-level pulse-amplitude-modulation inductor-less vertical-cavity surface-emitting laser driver integrated circuit in 130 nm BiCMOS technology This paper presents the design and analysis of a 4-level pulse-amplitude-modulation (4-PAM) 56 Gbit/s vertical-cavity surface-emitting laser (VCSEL) driver integrated circuit (IC) for short range, high speed and low power optical interconnections. An amplitude-modulated signal is necessary to overcome the speed bottleneck of present VCSELs and decrease the power consumption per bit. A prototype IC is developed in a standard 130 nm BiCMOS technology. The circuit converts two single-ended input signals to a 4-level signal fed to the laser. The driver also provides the DC current and the voltage necessary to bias the VCSEL. The power dissipation of the driver is only 115 mW including both the VCSEL and the 50 Ω input single-to-differential-ended converters. To the author's knowledge this is the first 56 Gbit/s 4-PAM laser driver implemented in silicon, with a power dissipation per data rate of 2.05 mW/Gbit/s including the VCSEL, making it the most power efficient 56 Gbit/s common cathode laser driver. The active area occupies 0.056 mm². The small signal bandwidths are 49 GHz for the high and 43 GHz for the low amplitude amplification path, when the VCSEL is not connected. The bit error rate was tested electrically, showing an error-free connection at 28 GBaud.
Survey of Photonic and Plasmonic Interconnect Technologies for Intra-Datacenter and High-Performance Computing Communications. Large scale data centers (DC) and high performance computing (HPC) systems require more and more computing power at higher energy efficiency. They are already consuming megawatts of power, and a linear extrapolation of trends reveals that they may eventually lead to unrealistic power consumption scenarios in order to satisfy future requirements (e.g., Exascale computing). Conventional complementar...
A Differential Push-Pull Voltage Mode VCSEL Driver in 65-nm CMOS Improving power-conversion efficiency (PCE) of VCSEL drivers is paramount to improve the overall energy efficiency of the entire optical link for high-performance computing and datacenters. VCSEL diodes are normally driven single-ended with pseudo-differential current-mode drivers to maintain signal integrity. However, such conventional drivers consume significant power and are often unable to compensate for supply switching noise due to package parasitics at high data-rates. We propose a differential push-pull voltage-mode VCSEL driver to mitigate bondwire parasitics, reduce power consumption, and leverage CMOS process scaling to its maximum advantage. A proof-of-concept prototype in 65-nm CMOS process achieves the highest ever-reported PCE of 18.7 % for VCSEL drivers when normalized to VCSEL slope efficiency. It uses an asymmetric 3-tap rise and fall-based pre-emphasis to achieve a total energy/bit of 1.52 pJ/b at 16 Gb/s with an average optical power output of 1.34 dBm, OMA of 2.1 dBm, and extinction ratio of 5.92 dB.
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ◊W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ◊W. Thus, ◊W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Cognitive radio: brain-empowered wireless communications Cognitive radio is viewed as a novel approach for improving the utilization of a precious natural resource: the radio electromagnetic spectrum. The cognitive radio, built on a software-defined radio, is defined as an intelligent wireless communication system that is aware of its environment and uses the methodology of understanding-by-building to learn from the environment and adapt to statistical variations in the input stimuli, with two primary objectives in mind: highly reliable communication whenever and wherever needed, and efficient utilization of the radio spectrum. Following the discussion of interference temperature as a new metric for the quantification and management of interference, the paper addresses three fundamental cognitive tasks: 1) radio-scene analysis; 2) channel-state estimation and predictive modeling; 3) transmit-power control and dynamic spectrum management. This work also discusses the emergent behavior of cognitive radio.
Planning as heuristic search In the AIPS98 Planning Contest, the HSP planner showed that heuristic search planners can be competitive with state-of-the-art Graphplan and SAT planners. Heuristic search planners like HSP transform planning problems into problems of heuristic search by automatically extracting heuristics from STRIPS encodings. They differ from specialized problem solvers such as those developed for the 24-Puzzle and Rubik's Cube in that they use a general declarative language for stating problems and a general mechanism for extracting heuristics from these representations. In this paper, we study a family of heuristic search planners that are based on a simple and general heuristic that assumes that action preconditions are independent. The heuristic is then used in the context of best-first and hill-climbing search algorithms, and is tested over a large collection of domains. We then consider variations and extensions such as reversing the direction of the search for speeding node evaluation, and extracting information about propositional invariants for avoiding dead-ends. We analyze the resulting planners, evaluate their performance, and explain when they do best. We also compare the performance of these planners with two state-of-the-art planners, and show that the simplest planner based on a pure best-first search yields the most solid performance over a large set of problems. We also discuss the strengths and limitations of this approach, establish a correspondence between heuristic search planning and Graphplan, and briefly survey recent ideas that can reduce the current gap in performance between general heuristic search planners and specialized solvers.
Probabilistic neural networks By replacing the sigmoid activation function often used in neural networks with an exponential function, a probabilistic neural network (PNN) that can compute nonlinear decision boundaries which approach the Bayes optimal is formed. Alternate activation functions having similar properties are also discussed. A four-layer neural network of the type proposed can map any input pattern to any number of classifications. The decision boundaries can be modified in real-time using new data as they become available, and can be implemented using artificial hardware "neurons" that operate entirely in parallel. Provision is also made for estimating the probability and reliability of a classification as well as making the decision. The technique offers a tremendous speed advantage for problems in which the incremental adaptation time of back propagation is a significant fraction of the total computation time. For one application, the PNN paradigm was 200,000 times faster than back-propagation.
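The PNN decision rule amounts to a Parzen-window density estimate per class built from Gaussian (exponential) units, one per training sample, followed by an argmax over classes. A minimal sketch (the smoothing parameter sigma and the toy data are made up; the original four-layer hardware formulation is not reproduced here):

```python
import numpy as np

def pnn_predict(X_train, y_train, x, sigma=0.5):
    # Pattern layer: one Gaussian unit per training sample;
    # summation layer: mean activation per class;
    # output layer: pick the class with the largest summed activation.
    scores = {}
    for c in np.unique(y_train):
        diff = X_train[y_train == c] - x
        act = np.exp(-np.sum(diff * diff, axis=1) / (2.0 * sigma ** 2))
        scores[int(c)] = act.mean()
    return max(scores, key=scores.get)

# Two well-separated clusters, two training points each.
X = np.array([[0.0, 0.0], [0.0, 1.0], [5.0, 5.0], [5.0, 6.0]])
y = np.array([0, 0, 1, 1])
print(pnn_predict(X, y, np.array([0.2, 0.3])))  # -> 0
```

Because "training" is just storing samples, new data changes the decision boundary immediately, which is the real-time adaptation advantage the abstract contrasts with back-propagation.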
TaintDroid: An Information-Flow Tracking System for Realtime Privacy Monitoring on Smartphones Today’s smartphone operating systems frequently fail to provide users with visibility into how third-party applications collect and share their private data. We address these shortcomings with TaintDroid, an efficient, system-wide dynamic taint tracking and analysis system capable of simultaneously tracking multiple sources of sensitive data. TaintDroid enables realtime analysis by leveraging Android’s virtualized execution environment. TaintDroid incurs only 32% performance overhead on a CPU-bound microbenchmark and imposes negligible overhead on interactive third-party applications. Using TaintDroid to monitor the behavior of 30 popular third-party Android applications, in our 2010 study we found 20 applications potentially misused users’ private information; so did a similar fraction of the tested applications in our 2012 study. Monitoring the flow of privacy-sensitive data with TaintDroid provides valuable input for smartphone users and security service firms seeking to identify misbehaving applications.
On receding horizon feedback control Receding horizon feedback control (RHFC) was originally introduced as an easy method for designing stable state-feedback controllers for linear systems. Here those results are generalized to the control of nonlinear autonomous systems, and we develop a performance index which is minimized by the RHFC (inverse optimal control problem). Previous results for linear systems have shown that desirable nonlinear controllers can be developed by making the RHFC horizon distance a function of the state. That functional dependence was implicit and difficult to implement on-line. Here we develop similar controllers for which the horizon distance is an easily computed explicit function of the state.
Sensor network gossiping or how to break the broadcast lower bound Gossiping is an important problem in Radio Networks that has been well studied, leading to many important results. Due to strong resource limitations of sensor nodes, previous solutions are frequently not feasible in Sensor Networks. In this paper, we study the gossiping problem in the restrictive context of Sensor Networks. By exploiting the geometry of sensor node distributions, we present an algorithm with reduced, optimal running time O(D + Δ) that completes gossiping with high probability in a Sensor Network of unknown topology and adversarial wake-up, where D is the diameter and Δ the maximum degree of the network. Given that an algorithm for gossiping also solves the broadcast problem, our result proves that the classic lower bound of [16] can be broken if nodes are allowed to do preprocessing.
20.3 A feedforward controlled on-chip switched-capacitor voltage regulator delivering 10W in 32nm SOI CMOS On-chip (or fully integrated) switched-capacitor (SC) voltage regulators (SCVR) have recently received a lot of attention due to their ease of monolithic integration. The use of deep trench capacitors can lead to SCVR implementations that simultaneously achieve high efficiency, high power density, and fast response time. For the application of granular power distribution of many-core microprocessor systems, the on-chip SCVR must maintain an output voltage above a certain minimum level Vout,min in order for the microprocessor core to meet setup time requirements. Following a transient load change, the output voltage typically exhibits a droop due to parasitic inductances and resistances in the power distribution network. Therefore, the steady-state output voltage is kept high enough to ensure Vout > Vout,min at all times, thereby introducing an output voltage overhead that leads to increased system power consumption. The output voltage droop can be reduced by implementing fast regulation and a sufficient amount of on-chip decoupling capacitance. However, a large amount of on-chip decoupling capacitance is needed to significantly reduce the droop, and it becomes impractical to implement owing to the large chip area overhead required. This paper presents a feedforward control scheme that significantly reduces the output voltage droop in the presence of a large input voltage droop following a transient event. This in turn reduces the required output voltage overhead and may lead to significant overall system power savings.
A VCO-Based Nonuniform Sampling ADC Using a Slope-Dependent Pulse Generator This paper presents a voltage-controlled oscillator (VCO)-based nonuniform sampling analog-to-digital converter (ADC) as an alternative to the level-crossing (LC)-based converters for digitizing biopotential signals. This work aims to provide a good signal-to-noise-and-distortion ratio at a low average sampling rate. In the proposed conversion method, a slope-dependent pulse generation block is used to provide a variable sample rate adjusted according to the input signal's slope. Simulation results show that the introduced method meets a target reconstruction quality with a sampling rate approaching 92 Sps, while on the same MIT-BIH Arrhythmia N 106 ECG benchmark, the classic LC-based approach requires a sampling rate higher than 500 Sps. The benefits of the proposed method are more remarkable when the input signal is very noisy. The proposed ADC achieves a compression ratio close to 4, but with only 5.4% root-mean-square difference when tested using the MIT-BIH Arrhythmia Database.
score_0–score_13: 1.1, 0.1, 0.1, 0.1, 0.1, 0, 0, 0, 0, 0, 0, 0, 0, 0
Direct bandpass sampling of multiple distinct RF signals A goal in the software radio design philosophy is to place the analog-to-digital converter as near the antenna as possible. This objective has been demonstrated for the case of a single input signal. Bandpass sampling has been applied to downconvert, or intentionally alias, the information bandwidth of a radio frequency (RF) signal to a desired intermediate frequency. The design of the software radio becomes more interesting when two or more distinct signals are received. The traditional approach for multiple signals would be to bandpass sample a continuous span of spectrum containing all the desired signals. The disadvantage with this approach is that the sampling rate and associated discrete processing rate are based on the span of spectrum as opposed to the information bandwidths of the signals of interest. Proposed here is a technique to determine the absolute minimum sampling frequency for direct digitization of multiple, nonadjacent, frequency bands. The entire process is based on the calculation of a single parameter—the sampling frequency. The result is a simple, yet elegant, front-end design for the reception and bandpass sampling of multiple RF signals. Experimental results using RF transmissions from the U.S. Global Positioning System—Standard Position Service (GPS-SPS) and the Russian Global Navigation Satellite System (GLONASS) are used to illustrate and verify the theory.
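For the single-band case that the multiband technique generalizes, the admissible uniform bandpass sampling rates follow the classic constraint 2·f_H/n ≤ f_s ≤ 2·f_L/(n−1); a short sketch (the function name is ours) enumerates those intervals:

```python
def valid_bandpass_rates(f_l, f_h):
    """Return the (fs_min, fs_max) intervals of uniform sampling rates
    that alias the band [f_l, f_h] to baseband without self-overlap.
    Single-band case only; the paper's contribution is the extension
    to several nonadjacent bands, which intersects such constraints."""
    b = f_h - f_l
    n_max = int(f_h // b)
    ranges = []
    for n in range(1, n_max + 1):
        lo = 2.0 * f_h / n
        hi = 2.0 * f_l / (n - 1) if n > 1 else float("inf")
        if lo <= hi:
            ranges.append((lo, hi))
    return ranges
```

For a band from 20 to 25 MHz (B = 5 MHz) the lowest admissible rate lands exactly on the theoretical floor of 2B = 10 MHz.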
New Architecture for a Wireless Smart Sensor Based on a Software-Defined Radio. Today, wireless sensor technology is based on monolithic transceivers that optimize cost but have a rigid hardware architecture. In this paper, a new architecture for wireless sensors is presented. It is based on a software-defined radio concept and shows impressive adaptability to external conditions. The proposed architecture, which is called the wireless ultrasmart sensor (WUSS), enables the us...
RF Front-End Concept and Implementation for Direct Sampling of Multiband Signals. The placement of the analog-to-digital converter as near the antenna as possible is a key issue in the software-defined radio receiver design. Direct sampling of the incoming filtered signal is a compact solution enabling channel simultaneity. In this brief, in the context of evenly spaced equal-bandwidth multiband systems, sufficient conditions for the channel allocation assuring that the minimum...
LC-Based Bandpass Continuous-Time Sigma-Delta Modulators With Widely Tunable Notch Frequency This paper analyses the use of bandpass continuous-time ΣΔ modulators with widely programmable notch frequency for the efficient digitization of radio-frequency signals in the next generation of software-defined-radio mobile systems. The modulator architectures under study are based on a fourth-order loop filter - implemented with two LC-based resonators - and a finite-impulsive-response feedback loop in order to increase their flexibility and degrees of freedom. Several topologies are studied, considering three different cases for the embedded digital-to-analog converter, namely: return-to-zero, non-return-to-zero and raised-cosine waveform. In all cases, a notch-aware synthesis methodology is presented, which takes into account the dependency of the loop-filter coefficients on the notch frequency and compensates for the dynamic range degradation due to the variation of the notch. The synthesized modulators are compared in terms of their sensitivity to main circuit error mechanisms and the estimated power consumption over a notch-frequency tuning range of 0.1fs to 0.4fs. Time-domain behavioral and macromodel electrical simulations validate this approach, demonstrating the feasibility of the presented methodology and architectures for the efficient and robust digitization of radio-frequency signals with a scalable resolution and programmable signal bandwidth.
The Design Method and Performance Analysis of RF Subsampling Frontend for SDR/CR Receivers RF subsampling can be used by radio receivers to directly down convert and digitize RF signals. The goal of software-defined radio (SDR) design is to place the analog-to-digital converter (ADC) as near the antenna as possible. Based on this, an RF subsampling frontend (FE) for SDR is designed and verified by a hardware platform. The effects of timing jitter, ADC resolution, and folding noise dominating SNR degradation sources in the digital FE were considered. We present an efficient method of SNR measurement and an analysis of its performance. The experimental results indicate that the three degradation sources are sufficient to estimate the performance of the RF subsampling FE, and this conclusion matches the theoretical analysis results.
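The three degradation sources named above can be folded into a first-order SNR budget using the usual textbook expressions; these are standard approximations, not the paper's measured figures:

```python
import math

def combined_snr_db(n_bits, f_carrier, jitter_rms, folded_zones):
    """First-order SNR budget for an RF subsampling front end:
    quantization noise (6.02*N + 1.76 dB), aperture jitter
    (-20*log10(2*pi*f_c*sigma_j)), and noise folded in from
    `folded_zones` Nyquist zones, modeled here as a 10*log10(m)
    penalty on the quantization floor."""
    snr_q = 6.02 * n_bits + 1.76 - 10.0 * math.log10(folded_zones)
    snr_j = -20.0 * math.log10(2.0 * math.pi * f_carrier * jitter_rms)
    total = 10 ** (-snr_q / 10.0) + 10 ** (-snr_j / 10.0)
    return -10.0 * math.log10(total)

# 12-bit ADC subsampling a 1-GHz carrier with 1 ps rms jitter:
# the budget is jitter-limited at roughly 44 dB.
snr = combined_snr_db(12, 1e9, 1e-12, 1)
```

Increasing the number of folded zones only degrades the total, which is why the analog preselect filter ahead of the subsampler matters.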
All-digital TX frequency synthesizer and discrete-time receiver for Bluetooth radio in 130-nm CMOS We present a single-chip fully compliant Bluetooth radio fabricated in a digital 130-nm CMOS process. The transceiver is architectured from the ground up to be compatible with digital deep-submicron CMOS processes and be readily integrated with a digital baseband and application processor. The conventional RF frequency synthesizer architecture, based on the voltage-controlled oscillator and the ph...
A Second-Order Antialiasing Prefilter for a Software-Defined Radio Receiver A new architecture is presented for a sinc²(f) filter intended to sample channels of varying bandwidth when surrounded by blockers and adjacent bands. The sample rate is programmable from 5 to 40 MHz, and aliases are suppressed by 45 dB or more. The noise and linearity performance of the filter is analyzed, and the effects of various imperfections such as transconductor finite output impedance, interchannel gain mismatch, and residual offsets in the channels are studied. Furthermore, it is proved that the filter is robust to the clock jitter. The 0.13-μm CMOS circuit consumes 6 mA from a 1.2-V supply.
Second-order intermodulation mechanisms in CMOS downconverters An in-depth analysis of the mechanisms responsible for second-order intermodulation distortion in CMOS active downconverters is proposed in this paper. The achievable second-order input intercept point (IIP2) has a fundamental limit due to nonlinearity and mismatches in the switching stage and improves with technology scaling. Second-order intermodulation products generated by the input transcondu...
Track-and-Zoom Neural Analog-to-Digital Converter With Blind Stimulation Artifact Rejection Closed-loop neuromodulation for the treatment of neurological disorders requires monitoring of the brain activity uninterruptedly even during neurostimulation. This article presents a bidirectional 32-channel CMOS neural interface that can record neural activity during stimulation. Each channel consists of a dc-coupled Δ²Σ-modulated analog-to-digital converter (neural-ADC), which records slow potentials (< 0.1 Hz) while accommodating rail-to-rail dc offset using a spectrum-shaping front-end. This front-end equalizes the neural signal spectrum before signal quantization, which reduces the energy consumption and silicon area. Upon detection of a large artifact by an in-channel event-triggered digital block, the modulator feedback DAC tracks the artifact with step sizes incrementing in a radix-2 exponential form, preventing the neural-ADC from saturation. Upon tracking the artifact, the multi-bit DAC step size is reduced to zoom into the input neural signal at the highest recording resolution. The modulator's multi-bit DAC is reused in a time-shared fashion as a current-mode stimulator with no area overhead. The Δ²Σ-ADC consumes 1.7 μW from 0.6-V/1.2-V digital/analog supplies and time-shares the modulator's feedback DAC as the multi-bit current-mode stimulator operating at 3.3 V. The ADC occupies a silicon area of 0.023 mm² in the 130-nm CMOS and achieves a signal-to-noise-and-distortion ratio (SNDR) of 70 dB over the 500-Hz bandwidth and an equivalent noise efficiency factor (NEF) of 2.86 without a stand-alone front-end amplifier. The 32-channel bidirectionally interfacing prototype is validated in the in vivo whole brain of a rodent.
Computing size-independent matrix problems on systolic array processors A methodology to transform dense to band matrices is presented in this paper. This transformation is accomplished by triangular block partitioning, and allows the implementation of solutions to problems of any given size by means of contraflow systolic arrays, originally proposed by H.T. Kung. Matrix-vector and matrix-matrix multiplications are the operations considered here. The proposed transformations allow the optimal utilization of the processing elements (PEs) of the systolic array when dense matrices are operated on. Every computation is made inside the array by using adequate feedback. The feedback delay time depends only on the systolic array size.
A tight lower bound on the cover time for random walks on graphs We prove that the expected time for a random walk to cover all n vertices of a graph is at least (1 + o(1))n ln n. © 1995 Wiley Periodicals, Inc.
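The n·ln n lower bound can be checked numerically against the complete graph, whose cover time is the coupon-collector time of roughly n·ln n; the seeded simulation below is only a sanity experiment, not part of the proof:

```python
import math
import random

def cover_time(adj, start, rng):
    """Number of steps until a simple random walk has visited every
    vertex of the graph given by adjacency lists `adj`."""
    seen, v, steps = {start}, start, 0
    while len(seen) < len(adj):
        v = rng.choice(adj[v])
        seen.add(v)
        steps += 1
    return steps

# Complete graph on n vertices: covering it is the coupon-collector
# process, whose expectation (n-1)*H_{n-1} sits just above n*ln(n),
# matching the paper's general lower bound.
n = 64
adj = [[u for u in range(n) if u != v] for v in range(n)]
rng = random.Random(1)
avg = sum(cover_time(adj, 0, rng) for _ in range(200)) / 200
bound = n * math.log(n)
```

The averaged cover time lands a little above the bound, as the theorem requires for every graph.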
Decision making for cognitive radio equipment: analysis of the first 10 years of exploration. This article draws a general retrospective view on the first 10 years of cognitive radio (CR). More specifically, we explore decision making and learning for CR from an equipment perspective. Thus, this article depicts the main decision-making problems addressed by the community as general dynamic configuration adaptation (DCA) problems and discusses the solutions proposed in the literature to tackle them. Within this framework, dynamic spectrum management is briefly introduced as a specific instantiation of DCA problems. We identified, in our analysis, three dimensions of constraints: the environment's, the equipment's and the user's related constraints. Moreover, we define and use the notion of a priori knowledge to show that the challenges tackled by the radio community during the first 10 years of CR to solve decision-making problems often share the same design space but differ in the a priori knowledge they assume available. Consequently, we suggest in this article "a priori knowledge" as a classification criterion to discriminate among the main techniques proposed in the literature to solve configuration adaptation decision-making problems. We finally discuss the impact of sensing errors on the decision-making process as a prospective analysis.
Implementation of LTE SC-FDMA on the USRP2 software defined radio platform In this paper we discuss the implementation of a Single Carrier Frequency Division Multiple Access (SC-FDMA) transceiver running over the Universal Software Radio Peripheral 2 (USRP2). SC-FDMA is the air interface which has been selected for the uplink in the latest Long Term Evolution (LTE) standard. In this paper we derive an AWGN channel model for SC-FDMA transmission, which is useful for benchmarking experimental results. In our implementation, we deal with signal scaling, equalization and partial synchronization to realize SC-FDMA transmission over a noisy channel at rates up to 5.184 Mbit/s. Experimental results on the Bit Error Rate (BER) versus Signal-to-Noise Ratio (SNR) are presented and compared to theoretical and simulated performance.
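SC-FDMA is DFT-spread OFDM: the M data symbols are precoded with an M-point DFT, mapped onto M of the N subcarriers, and converted to the time domain with an N-point IDFT. A minimal localized-mapping round trip can be sketched as follows (no cyclic prefix, channel model, equalization, or LTE scaling conventions):

```python
import cmath

def dft(x, inverse=False):
    """Naive O(N^2) DFT -- slow but dependency-free and adequate here."""
    n = len(x)
    sign = 1 if inverse else -1
    out = [sum(x[k] * cmath.exp(sign * 2j * cmath.pi * j * k / n)
               for k in range(n)) for j in range(n)]
    return [v / n for v in out] if inverse else out

def scfdma_tx(symbols, n_fft):
    """DFT-spread OFDM: precode M symbols with an M-point DFT, map them
    onto the first M of n_fft subcarriers (localized mapping), and take
    an n_fft-point IDFT to produce time-domain samples."""
    spread = dft(symbols)
    grid = spread + [0j] * (n_fft - len(symbols))
    return dft(grid, inverse=True)

def scfdma_rx(samples, m):
    """Undo the transmitter: back to subcarriers, de-map, de-spread."""
    grid = dft(samples)
    return dft(grid[:m], inverse=True)

qpsk = [1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]
recovered = scfdma_rx(scfdma_tx(qpsk, 16), len(qpsk))
```

The DFT precoding is what gives SC-FDMA its single-carrier envelope and lower peak-to-average power ratio compared with plain OFDM.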
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
score_0–score_13: 1.009355, 0.009167, 0.008333, 0.008333, 0.004167, 0.001367, 0.000082, 0.000013, 0.000001, 0, 0, 0, 0, 0
A comprehensive review on type 2 fuzzy logic applications: Past, present and future In this paper, a concise overview of the work that has been done by various researchers in the area of type-2 fuzzy logic is presented and discussed. Type-2 fuzzy systems have been widely applied in the fields of intelligent control, pattern recognition and classification, among others. The overview mainly focuses on past, present and future trends of type-2 fuzzy logic applications. Of utmost importance is the last part, outlining possible areas of applied research in type-2 FL in the future. The major contribution of the paper is a briefing of the most relevant work in the area of type-2 fuzzy logic, including its theoretical and practical implications, as well as a vision of possible future work and trends in this area of research. We believe that this paper will provide a good platform for people interested in this area for their future research work.
Type-2 Fuzzy Sets and Systems: An Overview [corrected reprint] As originally published in the February 2007 issue of IEEE Computational Intelligence Magazine, the above titled paper (ibid., vol. 2, no. 1, pp. 20-29, Feb 07) contained errors in mathematics that were introduced by the publisher. The corrected version is reprinted in its entirety.
Stability of switched positive linear systems with average dwell time switching. In this paper, the stability analysis problem for a class of switched positive linear systems (SPLSs) with average dwell time switching is investigated. A multiple linear copositive Lyapunov function (MLCLF) is first introduced, by which sufficient stability criteria, in terms of a set of linear matrix inequalities, are given for the underlying systems in both continuous-time and discrete-time contexts. The stability results for SPLSs under arbitrary switching, which have been previously studied in the literature, can be easily obtained by reducing the MLCLF to the common linear copositive Lyapunov function used for systems under arbitrary switching. Finally, a numerical example is given to show the effectiveness and advantages of the proposed techniques.
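For a single positive linear subsystem x' = Ax with A Metzler, a linear copositive Lyapunov function V(x) = vᵀx certifies stability when v > 0 and Aᵀv < 0 hold componentwise; the multiple-function approach assigns one such vector per mode and adds a dwell-time bound. A quick componentwise check (function and variable names are ours):

```python
def copositive_lyapunov_ok(A, v):
    """Check the linear copositive Lyapunov condition for one positive
    subsystem x' = Ax (A Metzler): v > 0 and A^T v < 0, both
    componentwise, certify asymptotic stability of that mode."""
    n = len(A)
    if any(vi <= 0 for vi in v):
        return False
    atv = [sum(A[i][j] * v[i] for i in range(n)) for j in range(n)]
    return all(x < 0 for x in atv)

# A stable Metzler matrix and a valid certificate vector.
A_stable = [[-2.0, 1.0], [1.0, -3.0]]
```

Because the condition is linear in v, it can be posed as a feasibility problem, which is how the paper arrives at linear-matrix-inequality criteria.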
Output tracking control for a class of continuous-time T-S fuzzy systems This paper investigates the problem of output tracking for nonlinear systems with actuator fault using interval type-2 (IT2) fuzzy model approach. An IT2 state-feedback fuzzy controller is designed to perform the tracking control problem, where the membership functions can be freely chosen since the number of fuzzy rules is different from that of the IT2 T-S fuzzy model. Based on Lyapunov stability theory, an existence condition of an IT2 fuzzy H∞ output tracking controller is obtained to guarantee that the output of the closed-loop IT2 control system can track the output of a given reference model well in the H∞ sense. Finally, two illustrative examples are given to demonstrate the effectiveness and merits of the proposed design techniques.
Adaptive Fault-Tolerant Tracking Control for Discrete-Time Multiagent Systems via Reinforcement Learning Algorithm This article investigates the adaptive fault-tolerant tracking control problem for a class of discrete-time multiagent systems via a reinforcement learning algorithm. The action neural networks (NNs) are used to approximate unknown and desired control input signals, and the critic NNs are employed to estimate the cost function in the design procedure. Furthermore, the direct adaptive optimal controllers are designed by combining the backstepping technique with the reinforcement learning algorithm. Comparing the existing reinforcement learning algorithm, the computational burden can be effectively reduced by using the method of less learning parameters. The adaptive auxiliary signals are established to compensate for the influence of the dead zones and actuator faults on the control performance. Based on the Lyapunov stability theory, it is proved that all signals of the closed-loop system are semiglobally uniformly ultimately bounded. Finally, some simulation results are presented to illustrate the effectiveness of the proposed approach.
Robust fuzzy tracking control for robotic manipulators In this paper, a stable adaptive fuzzy-based tracking control is developed for robot systems with parameter uncertainties and external disturbance. First, a fuzzy logic system is introduced to approximate the unknown robotic dynamics by using adaptive algorithm. Next, the effect of system uncertainties and external disturbance is removed by employing an integral sliding mode control algorithm. Consequently, a hybrid fuzzy adaptive robust controller is developed such that the resulting closed-loop robot system is stable and the trajectory tracking performance is guaranteed. The proposed controller is appropriate for the robust tracking of robotic systems with system uncertainties. The validity of the control scheme is shown by computer simulation of a two-link robotic manipulator.
A Survey of Reachability and Controllability for Positive Linear Systems. This paper is a survey of reachability and controllability results for discrete-time positive linear systems. It presents a variety of criteria in both algebraic and digraph forms for recognising these fundamental system properties with direct implications not only in dynamic optimization problems (such as those arising in inventory and production control, manpower planning, scheduling and other areas of operations research) but also in studying properties of reachable sets, in feedback control problems, and others. The paper highlights the intrinsic combinatorial structure of reachable/controllable positive linear systems and reveals the monomial components of such systems. The system matrix decomposition into monomial components is demonstrated by solving some illustrative examples.
GloMoSim: a library for parallel simulation of large-scale wireless networks A number of library-based parallel and sequential network simulators have been designed. This paper describes a library, called GloMoSim (for Global Mobile system Simulator), for parallel simulation of wireless networks. GloMoSim has been designed to be extensible and composable: the communication protocol stack for wireless networks is divided into a set of layers, each with its own API. Models of protocols at one layer interact with those at a lower (or higher) layer only via these APIs. The modular implementation enables consistent comparison of multiple protocols at a given layer. The parallel implementation of GloMoSim can be executed using a variety of conservative synchronization protocols, which include the null message and conditional event algorithms. This paper describes the GloMoSim library, addresses a number of issues relevant to its parallelization, and presents a set of experimental results on the IBM 9076 SP, a distributed memory multicomputer. These experiments use models constructed from the library modules. 1 Introduction The rapid advancement in portable computing platforms and wireless communication technology has led to significant interest in mobile computing and mobile networking. Two primary forms of mobile computing are becoming popular: first, mobile computers continue to heavily use wired network infrastructures. Instead of being hardwired to a single location (or IP address), a computer can dynamically move to multiple locations while maintaining application transparency. Protocols such as
TAG: a Tiny AGgregation service for ad-hoc sensor networks We present the Tiny AGgregation (TAG) service for aggregation in low-power, distributed, wireless environments. TAG allows users to express simple, declarative queries and have them distributed and executed efficiently in networks of low-power, wireless sensors. We discuss various generic properties of aggregates, and show how those properties affect the performance of our in network approach. We include a performance study demonstrating the advantages of our approach over traditional centralized, out-of-network methods, and discuss a variety of optimizations for improving the performance and fault tolerance of the basic solution.
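TAG's in-network evaluation rests on decomposable aggregates expressed as an initializer, a merge of partial-state records, and a final evaluator; for AVG the partial state is a (sum, count) pair. A toy model (the routing tree and readings are invented for illustration):

```python
def init_avg(value):
    return (value, 1)  # partial-state record for AVG: (sum, count)

def merge_avg(a, b):
    return (a[0] + b[0], a[1] + b[1])

def evaluate_avg(state):
    return state[0] / state[1]

def aggregate(tree, readings, node):
    """Combine a node's own reading with its children's partial states
    on the way up a routing tree, mimicking TAG's in-network
    aggregation; only one small record per link reaches the parent."""
    state = init_avg(readings[node])
    for child in tree.get(node, []):
        state = merge_avg(state, aggregate(tree, readings, child))
    return state

tree = {0: [1, 2], 1: [3, 4]}  # node 0 is the sink
readings = {0: 10, 1: 20, 2: 30, 3: 40, 4: 50}
root_state = aggregate(tree, readings, 0)
```

Each link carries a fixed-size record instead of all raw readings, which is where the energy savings over centralized collection come from.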
On the evolution of user interaction in Facebook Online social networks have become extremely popular; numerous sites allow users to interact and share content using social links. Users of these networks often establish hundreds to even thousands of social links with other users. Recently, researchers have suggested examining the activity network - a network that is based on the actual interaction between users, rather than mere friendship - to distinguish between strong and weak links. While initial studies have led to insights on how an activity network is structurally different from the social network itself, a natural and important aspect of the activity network has been disregarded: the fact that over time social links can grow stronger or weaker. In this paper, we study the evolution of activity between users in the Facebook social network to capture this notion. We find that links in the activity network tend to come and go rapidly over time, and the strength of ties exhibits a general decreasing trend of activity as the social network link ages. For example, only 30% of Facebook user pairs interact consistently from one month to the next. Interestingly, we also find that even though the links of the activity network change rapidly over time, many graph-theoretic properties of the activity network remain unchanged.
The Quadrature LC Oscillator: A Complete Portrait Based on Injection Locking We show that the quadrature LC oscillator is best treated as two strongly coupled, nominally identical oscillators that are locked to the same frequency. Differential equations that extend Adler's description of locking to strong injection reveal the full dynamics of this circuit. With a simplifying insight, the analysis reveals all the modes of the oscillator, their stability, the effects of mism...
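Adler's weak-injection equation, dθ/dt = Δω − ω_L·sin θ, predicts a settled phase with sin θ = Δω/ω_L inside the lock range; the paper's contribution is extending this picture to strong injection. A crude forward-Euler integration (step size and numbers are arbitrary) reproduces the weak-injection fixed point:

```python
import math

def adler_lock_phase(d_omega, lock_range, dt=1e-4, steps=200000):
    """Forward-Euler integration of Adler's equation
    d(theta)/dt = d_omega - lock_range*sin(theta);
    meaningful inside the lock range |d_omega| < lock_range."""
    theta = 0.0
    for _ in range(steps):
        theta += dt * (d_omega - lock_range * math.sin(theta))
    return theta

# Offset of 50 rad/s against a 100 rad/s lock range: sin(theta*) = 0.5.
theta_ss = adler_lock_phase(50.0, 100.0)
```

Outside the lock range no fixed point exists and the phase slips indefinitely, which is the boundary of injection locking.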
Permanent-magnets linear actuators applicability in automobile active suspensions Significant improvements in automobile suspension performance are achieved by active systems. However, current active suspension systems are too expensive and complex. Developments occurring in power electronics, permanent magnet materials, and microelectronic systems justifies analysis of the possibility of implementing electromagnetic actuators in order to improve the performance of automobile suspension systems without excessively increasing complexity and cost. In this paper, the layouts of hydraulic and electromagnetic active suspensions are compared. The actuator requirements are calculated, and some experimental results proving that electromagnetic suspension could become a reality in the future are shown.
SPECS: A Lightweight Runtime Mechanism for Protecting Software from Security-Critical Processor Bugs Processor implementation errata remain a problem, and worse, a subset of these bugs are security-critical. We classified 7 years of errata from recent commercial processors to understand the magnitude and severity of this problem, and found that of 301 errata analyzed, 28 are security-critical. We propose the SECURITY-CRITICAL PROCESSOR ERRATA CATCHING SYSTEM (SPECS) as a low-overhead solution to this problem. SPECS employs a dynamic verification strategy that is made lightweight by limiting protection to only security-critical processor state. As a proof-of-concept, we implement a hardware prototype of SPECS in an open source processor. Using this prototype, we evaluate SPECS against a set of 14 bugs inspired by the types of security-critical errata we discovered in the classification phase. The evaluation shows that SPECS is 86% effective as a defense when deployed using only ISA-level state; incurs less than 5% area and power overhead; and has no software run-time overhead.
Power Efficiency Comparison of Event-Driven and Fixed-Rate Signal Conversion and Compression for Biomedical Applications Energy-constrained biomedical recording systems need power-efficient data converters and good signal compression in order to meet the stringent power consumption requirements of many applications. In literature today, typically a SAR ADC in combination with digital compression is used. Recently, alternative event-driven sampling techniques have been proposed that incorporate compression in the ADC, such as level-crossing A/D conversion. This paper describes the power efficiency analysis of such level-crossing ADC (LCADC) and the traditional fixed-rate SAR ADC with simple compression. A model for the power consumption of the LCADC is derived, which is then compared to the power consumption of the SAR ADC with zero-order hold (ZOH) compression for multiple biosignals (ECG, EMG, EEG, and EAP). The LCADC is more power efficient than the SAR ADC up to a cross-over point in quantizer resolution (for example 8 bits for an EEG signal). This cross-over point decreases with the ratio of the maximum to average slope in the signal of the application. It also changes with the technology and design techniques used. The LCADC is thus suited for low to medium resolution applications. In addition, the event-driven operation of an LCADC results in fewer data to be transmitted in a system application. The event-driven LCADC without timer and with single-bit quantizer achieves a reduction in power consumption at system level of two orders of magnitude, an order of magnitude better than the SAR ADC with ZOH compression. At system level, the LCADC thus offers a big advantage over the SAR ADC.
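The SAR-plus-ZOH baseline in the comparison can be modeled as "transmit a fixed-rate sample only when it has moved at least one LSB since the last transmitted value"; a toy sketch (the test signal and LSB are chosen arbitrarily):

```python
import math

def zoh_compress(samples, lsb):
    """Keep a fixed-rate sample only when it differs from the last
    transmitted value by at least one LSB -- the simple zero-order-hold
    compression paired with a SAR ADC. Held-over samples reconstruct
    to within one LSB of the original by construction."""
    out, held = [], None
    for i, s in enumerate(samples):
        if held is None or abs(s - held) >= lsb:
            held = s
            out.append((i, s))
    return out

# Two periods of a sine at 200 samples per period.
sig = [math.sin(2 * math.pi * i / 200) for i in range(400)]
kept = zoh_compress(sig, 0.05)
ratio = len(sig) / len(kept)
```

As with level crossing, the transmitted data volume tracks the signal's total variation rather than the clock rate, which is what drives the system-level power comparison.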
score_0–score_13: 1.2, 0.2, 0.2, 0.2, 0.2, 0.1, 0.05, 0, 0, 0, 0, 0, 0, 0
Real time front-end for cognitive radio inspired by the human cochlea In this paper we discuss the real-time implementation and development of a front-end that is able to sample RF signals with a large bandwidth and dynamic range. This front-end uses an 8-channel RF multiplexer sampled by an 8-channel ADC board. An FPGA board is used to control the ADC and implements the digital synthesis filter bank. Preliminary results show that it is possible to reconstruct the input signal.
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use dominance frontiers, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
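Dominance frontiers can be demonstrated compactly; the sketch below derives dominator sets by the classic iterative data-flow method and then computes frontiers with the later Cooper–Harvey–Kennedy "runner" formulation rather than the paper's original bottom-up traversal:

```python
def dominators(cfg, entry):
    """Iterative data-flow computation of dominator sets: a node's
    dominators are itself plus the intersection over its predecessors."""
    nodes = list(cfg)
    dom = {n: set(nodes) for n in nodes}
    dom[entry] = {entry}
    preds = {n: [p for p in nodes if n in cfg[p]] for n in nodes}
    changed = True
    while changed:
        changed = False
        for n in nodes:
            if n == entry:
                continue
            new = set(nodes)
            for p in preds[n]:
                new &= dom[p]
            new |= {n}
            if new != dom[n]:
                dom[n], changed = new, True
    return dom, preds

def dominance_frontiers(cfg, entry):
    """For every join node, walk each predecessor up the dominator tree
    until the join's immediate dominator, adding the join to the
    frontier of every node passed on the way."""
    dom, preds = dominators(cfg, entry)
    idom = {}
    for n in cfg:
        strict = dom[n] - {n}
        for d in strict:
            if dom[d] == strict:  # d's dominators = n's strict dominators
                idom[n] = d
    df = {n: set() for n in cfg}
    for n in cfg:
        if len(preds[n]) >= 2:  # frontiers arise at join points
            for p in preds[n]:
                runner = p
                while runner != idom[n]:
                    df[runner].add(n)
                    runner = idom[runner]
    return df

# Diamond with a back edge: 0 -> {1,2}, 1 -> 3, 2 -> 3, 3 -> 1.
cfg = {0: [1, 2], 1: [3], 2: [3], 3: [1]}
df = dominance_frontiers(cfg, 0)
```

The frontier of a node is exactly the set of join points where SSA phi-functions for its definitions must be placed, which is how the paper uses it.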
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ⋄W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ⋄W. Thus, ⋄W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
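Chord's single operation maps a key to the node whose identifier is the first one clockwise from the key's identifier on the hash circle; the sketch below scans the whole ring for clarity, whereas real Chord resolves it in O(log n) hops via finger tables:

```python
import hashlib

def ring_id(name, bits=16):
    """Place a name on the 2**bits identifier circle (SHA-1, as in Chord)."""
    return int(hashlib.sha1(name.encode()).hexdigest(), 16) % (1 << bits)

def successor(node_ids, key_id):
    """The node responsible for a key is the first node at or after the
    key's identifier, wrapping around the circle if necessary."""
    ring = sorted(node_ids)
    for n in ring:
        if n >= key_id:
            return n
    return ring[0]  # wrap around the identifier circle

nodes = [ring_id(f"node-{i}") for i in range(8)]
owner = successor(nodes, ring_id("some-key"))
```

Because consistent hashing only remaps the keys between a joining or leaving node and its neighbor, churn stays cheap, which is the property the protocol exploits.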
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area2 product (EDA2P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA2P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)- size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε2}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D+√n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(logN), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator, while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via error amplifier) with a moderate-gain high-frequency path (via bandpass filter) at the inputs of the PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate the two zeros required in a Type III compensator. A constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process, with the constant Gm/C biasing technique applied to one of the designs. Measurement results show that the converter output settles within 7 μs for a load current step of 500 mA. A peak efficiency of 97% is obtained at 360 mW output power, and a high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of the proposed compensator are reduced by >75% in both designs, compared to an equivalent conventional Type III compensator.
Fundamental analysis of a car to car visible light communication system This paper presents a mathematical model for car-to-car (C2C) visible light communications (VLC) that aims to predict the system performance under different communication geometries. A market-weighted headlamp beam pattern model is employed. We consider both the line-of-sight (LOS) and non-line-of-sight (NLOS) links, and outline the relationship between the communication distance, the system bit error rate (BER) performance and the BER distribution on a vertical plane. Results show that by placing a photodetector (PD) at a height of 0.2-0.4 m above the road surface in the car, the communications coverage range can be extended up to 20 m at a data rate of 2 Mbps.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54 ×(CPU), 1.56 ×(GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02 ×(memristor-based PIM) and 10.48 ×(CMOS-based PIM), on average.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
An Efficient VLSI Architecture of a Reconfigurable Pulse-Shaping FIR Interpolation This brief proposes a two-step optimization technique for designing a reconfigurable VLSI architecture of an interpolation filter for multistandard digital up converter (DUC) to reduce the power and area consumption. The proposed technique initially reduces the number of multiplications per input sample and additions per input sample by 83% in comparison with individual implementation of each stan...
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use {\em dominance frontiers}, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
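The dominance frontiers mentioned in the abstract can be computed with a short algorithm: derive dominator sets iteratively, extract immediate dominators, then, for every join node, walk each predecessor up the dominator tree. The sketch below is illustrative (a simple set-based formulation, not the paper's efficient algorithm); the diamond-shaped CFG is an invented example.

```python
def dominators(cfg, entry):
    """Iterative dominator sets: dom[n] = {n} ∪ (∩ dom[p] over predecessors p)."""
    preds = {n: [] for n in cfg}
    for n, succs in cfg.items():
        for s in succs:
            preds[s].append(n)
    dom = {n: set(cfg) for n in cfg}
    dom[entry] = {entry}
    changed = True
    while changed:
        changed = False
        for n in cfg:
            if n == entry:
                continue
            new = {n}
            if preds[n]:
                new |= set.intersection(*(dom[p] for p in preds[n]))
            if new != dom[n]:
                dom[n] = new
                changed = True
    return dom, preds

def idom(dom, entry):
    """Immediate dominator = the strict dominator with the largest dominator set."""
    return {n: max(ds - {n}, key=lambda d: len(dom[d]))
            for n, ds in dom.items() if n != entry}

def dominance_frontiers(cfg, entry):
    """DF(n) = nodes where n's dominance stops; these are the phi-placement sites."""
    dom, preds = dominators(cfg, entry)
    id_ = idom(dom, entry)
    df = {n: set() for n in cfg}
    for n in cfg:
        if len(preds[n]) >= 2:          # only join nodes contribute to frontiers
            for p in preds[n]:
                runner = p
                while runner != id_[n]:  # walk up the dominator tree from each pred
                    df[runner].add(n)
                    runner = id_[runner]
    return df

# Example: diamond CFG A→{B,C}→D; the join D lands in the frontiers of B and C.
cfg = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
df = dominance_frontiers(cfg, "A")
```

In SSA construction, `df` tells the compiler where phi-functions for a variable defined in a block must be inserted.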
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ◊W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ◊W. Thus, ◊W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
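Chord's single operation (key → node) rests on consistent hashing: keys and nodes share one identifier ring, and a key belongs to its successor, the first node clockwise from the key's identifier. A minimal sketch of that mapping follows; the ring size `m = 8`, the node names, and the use of SHA-1 truncation are illustrative choices, not prescribed by the paper.

```python
import hashlib

def chord_id(name, m=8):
    """Hash a name onto the 2**m identifier ring (here m=8, so ids 0..255)."""
    return int.from_bytes(hashlib.sha1(name.encode()).digest(), "big") % (2 ** m)

def successor(node_ids, key_id):
    """Chord's one operation: map a key id to the first node clockwise on the ring."""
    for nid in sorted(node_ids):
        if nid >= key_id:
            return nid
    return min(node_ids)  # wrap around past the top of the ring

# Four nodes join the ring; a data item is stored at its key's successor.
ring = [chord_id(f"node{i}") for i in range(4)]
owner = successor(ring, chord_id("my-data-item"))
```

The real protocol finds the successor in O(log N) hops via finger tables rather than a linear scan; the scan above only illustrates the mapping itself.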
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area2 product (EDA2P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA2P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)- size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε2}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D+√n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(logN), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator, while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via error amplifier) with a moderate-gain high-frequency path (via bandpass filter) at the inputs of the PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate the two zeros required in a Type III compensator. A constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process, with the constant Gm/C biasing technique applied to one of the designs. Measurement results show that the converter output settles within 7 μs for a load current step of 500 mA. A peak efficiency of 97% is obtained at 360 mW output power, and a high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of the proposed compensator are reduced by >75% in both designs, compared to an equivalent conventional Type III compensator.
Conductance modulation techniques in switched-capacitor DC-DC converter for maximum-efficiency tracking and ripple mitigation in 22nm Tri-gate CMOS Active conduction modulation techniques are demonstrated in a fully integrated multi-ratio switched-capacitor voltage regulator with hysteretic control, implemented in 22nm tri-gate CMOS with high-density MIM capacitor. We present (i) an adaptive switching frequency and switch-size scaling scheme for maximum efficiency tracking across a wide range of voltages and currents, governed by a frequency-based control law that is experimentally validated across multiple dies and temperatures, and (ii) a simple active ripple mitigation technique to modulate the gate drive of select MOSFET switches effectively in all conversion modes. Efficiency boosts of up to 15% are measured under light-load conditions. Load-independent output ripple of <50 mV is achieved, enabling fewer interleaving phases. Testchip implementations and measurements demonstrate ease of integration in SoC designs, power efficiency benefits and EMI/RFI improvements.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54 ×(CPU), 1.56 ×(GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02 ×(memristor-based PIM) and 10.48 ×(CMOS-based PIM), on average.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
A Deep Reinforcement Learning-Based Multi-Optimality Routing Scheme For Dynamic IoT Networks With the development of Internet of Things (IoT) and 5G technologies, more and more applications, such as autonomous vehicles and tele-medicine, become more sensitive to network latency and accuracy, which require routing schemes to be more flexible and efficient. To meet this urgent need, learning-based routing strategies are emerging as strong candidate solutions, with the advantages of high flexibility and accuracy. These strategies can be divided into two categories, centralized and distributed, enjoying the advantages of high precision and high efficiency, respectively. However, routing becomes more complex in dynamic IoT networks, where the link connections and access states are time-varying, so these learning-based routing mechanisms are required to adapt to network changes in real time. In this paper, we designed and implemented both centralized and distributed Reinforcement Learning-based Routing schemes combined with Multi-optimality routing criteria (RLR-M). By conducting a series of experiments, we performed a comprehensive analysis of the results and concluded that the centralized scheme is better suited to cope with dynamic networks due to its faster reconvergence (2.2x over distributed), while the distributed scheme is better positioned to handle large-scale networks through its high scalability (1.6x over centralized). Moreover, the multi-optimality routing scheme is implemented through model fusion, which is more flexible than traditional strategies and as such is better placed to meet the needs of IoT.
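The distributed reinforcement-learning routing idea can be sketched with plain tabular Q-learning: each Q[node][next_hop] entry estimates the (negative) hop count remaining if the packet is forwarded to that neighbor. The topology, reward of -1 per hop, and all hyperparameters below are invented for illustration; the paper's RLR-M schemes use deep RL and multi-optimality criteria, not this toy.

```python
import random

def train_q_routing(adj, dest, episodes=2000, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    """Tabular Q-learning for shortest-hop routing toward a fixed destination."""
    rng = random.Random(seed)
    Q = {n: {nbr: 0.0 for nbr in nbrs} for n, nbrs in adj.items()}
    for _ in range(episodes):
        node = rng.choice(list(adj))
        for _ in range(20):                        # cap the episode length
            if node == dest:
                break
            if rng.random() < eps:                 # epsilon-greedy exploration
                nxt = rng.choice(adj[node])
            else:
                nxt = max(Q[node], key=Q[node].get)
            reward = -1.0                          # every forwarded hop costs 1
            future = 0.0 if nxt == dest else max(Q[nxt].values())
            Q[node][nxt] += alpha * (reward + gamma * future - Q[node][nxt])
            node = nxt
    return Q

def route(Q, src, dest, max_hops=10):
    """Greedy forwarding along the learned Q-values."""
    path, node = [src], src
    while node != dest and len(path) <= max_hops:
        node = max(Q[node], key=Q[node].get)
        path.append(node)
    return path

# 4-node ring A-B-C-D-A; the learned route from A to D should take the direct link.
adj = {"A": ["B", "D"], "B": ["A", "C"], "C": ["B", "D"], "D": ["A", "C"]}
Q = train_q_routing(adj, "D")
```

Because each node only needs Q-values for its own neighbors, this style of learning maps naturally onto the distributed setting the abstract describes.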
Enhancing peer-to-peer content discovery techniques over mobile ad hoc networks Content dissemination over mobile ad hoc networks (MANETs) is usually performed using peer-to-peer (P2P) networks due to their increased resiliency and efficiency when compared to client-server approaches. P2P networks are usually divided into two types, structured and unstructured, based on their content discovery strategy. Unstructured networks use controlled flooding, while structured networks use distributed indexes. This article evaluates the performance of these two approaches over MANETs and proposes modifications to improve their performance. Results show that unstructured protocols are extremely resilient, however they are not scalable and present high energy consumption and delay. Structured protocols are more energy-efficient, however they have a poor performance in dynamic environments due to the frequent loss of query messages. Based on those observations, we employ selective forwarding to decrease the bandwidth consumption in unstructured networks, and introduce redundant query messages in structured P2P networks to increase their success ratio.
Reducing query overhead through route learning in unstructured peer-to-peer network In unstructured peer-to-peer networks, such as Gnutella, peers propagate query messages towards the resource holders by flooding them through the network. This is, however, a costly operation since it consumes node and link resources excessively and often unnecessarily. There is no reason, for example, for a peer to receive a query message if the peer has no matching resource or is not on the path to a peer holding a matching resource. In this paper, we present a solution to this problem, which we call Route Learning, aiming to reduce query traffic in unstructured peer-to-peer networks. In Route Learning, peers try to identify the most likely neighbors through which replies can be obtained to submitted queries. In this way, a query is forwarded only to a subset of the neighbors of a peer, or it is dropped if no neighbor, likely to reply, is found. The scheme also has mechanisms to cope with variations in user submitted queries, like changes in the keywords. The scheme can also evaluate the route for a query for which it is not trained. We show through simulation results that when compared to a pure flooding based querying approach, our scheme reduces bandwidth overhead significantly without sacrificing user satisfaction.
A Trusted Routing Scheme Using Blockchain and Reinforcement Learning for Wireless Sensor Networks. A trusted routing scheme is very important to ensure the routing security and efficiency of wireless sensor networks (WSNs). There are a lot of studies on improving the trustworthiness between routing nodes, using cryptographic systems, trust management, or centralized routing decisions, etc. However, most of these routing schemes are difficult to realize in practice, as it is difficult to dynamically identify the untrusted behaviors of routing nodes. Meanwhile, there is still no effective way to prevent malicious node attacks. In view of these problems, this paper proposes a trusted routing scheme using blockchain and reinforcement learning to improve the routing security and efficiency for WSNs. The feasible routing scheme is given for obtaining routing information of routing nodes on the blockchain, which makes the routing information traceable and impossible to tamper with. The reinforcement learning model is used to help routing nodes dynamically select more trusted and efficient routing links. The experimental results show that even in a routing environment with 50% malicious nodes, our routing scheme still has a good delay performance compared with other routing algorithms. The performance indicators such as energy consumption and throughput also show that our scheme is feasible and effective.
Decentralized Multi-Agent Reinforcement Learning With Networked Agents: Recent Advances Multi-agent reinforcement learning (MARL) has long been a significant research topic in both machine learning and control systems. Recent development of (single-agent) deep reinforcement learning has created a resurgence of interest in developing new MARL algorithms, especially those founded on theoretical analysis. In this paper, we review recent advances on a sub-area of this topic: decentralized MARL with networked agents. In this scenario, multiple agents perform sequential decision-making in a common environment, and without the coordination of any central controller, while being allowed to exchange information with their neighbors over a communication network. Such a setting finds broad applications in the control and operation of robots, unmanned vehicles, mobile sensor networks, and the smart grid. This review covers several of our research endeavors in this direction, as well as progress made by other researchers along the line. We hope that this review promotes additional research efforts in this exciting yet challenging area.
Chord: a scalable peer-to-peer lookup protocol for internet applications A fundamental problem that confronts peer-to-peer applications is the efficient location of the node that stores a desired data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis and simulations show that Chord is scalable: Communication cost and the state maintained by each node scale logarithmically with the number of Chord nodes.
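Chord's single operation, mapping a key onto a node, can be sketched as consistent hashing on an identifier circle. This is a minimal illustration, not Chord's O(log N) finger-table lookup: the identifier width `M` and the `node-…`/`my-data-item` names are arbitrary assumptions for the example.

```python
import hashlib
from bisect import bisect_left

M = 16  # identifier-space bits (Chord itself uses e.g. 160-bit SHA-1 ids)

def chord_id(name: str) -> int:
    # Hash a node or key name onto the identifier circle [0, 2^M).
    return int(hashlib.sha1(name.encode()).hexdigest(), 16) % (2 ** M)

def successor(node_ids: list, key_id: int) -> int:
    # A key is stored at the first node whose id is >= the key's id,
    # wrapping around the circle (Chord's successor function).
    ids = sorted(node_ids)
    i = bisect_left(ids, key_id)
    return ids[i % len(ids)]

nodes = [chord_id("node-%d" % n) for n in range(8)]
owner = successor(nodes, chord_id("my-data-item"))
```

Because node joins and leaves only reassign the keys adjacent to the affected node on the circle, this mapping adapts with minimal data movement, which is the property the protocol builds on.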
Randomized algorithms This text by two well-known experts in the field presents the basic concepts in the design and analysis of randomized algorithms at a level accessible to beginning graduate students, professionals and researchers.
A Formal Basis for the Heuristic Determination of Minimum Cost Paths Although the problem of determining the minimum cost path through a graph arises naturally in a number of interesting applications, there has been no underlying theory to guide the development of efficient search procedures. Moreover, there is no adequate conceptual framework within which the various ad hoc search strategies proposed to date can be compared. This paper describes how heuristic information from the problem domain can be incorporated into a formal mathematical theory of graph searching and demonstrates an optimality property of a class of search strategies.
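The search strategy the paper formalizes, expanding nodes in order of f(n) = g(n) + h(n) with an admissible heuristic h, is what became known as A*. A compact sketch (graph representation and return convention are choices of this example, not of the paper):

```python
import heapq

def a_star(graph, h, start, goal):
    # graph: node -> list of (neighbor, edge_cost); h: admissible heuristic.
    # Nodes are expanded in order of f(n) = g(n) + h(n); with admissible h
    # the first time the goal is popped, its cost is minimal.
    frontier = [(h(start), 0, start, [start])]
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        for nbr, cost in graph.get(node, []):
            g2 = g + cost
            if g2 < best_g.get(nbr, float("inf")):
                best_g[nbr] = g2
                heapq.heappush(frontier, (g2 + h(nbr), g2, nbr, path + [nbr]))
    return None  # goal unreachable
```

With h ≡ 0 this degenerates to Dijkstra's algorithm; a better-informed admissible h expands fewer nodes while preserving optimality, which is the optimality property the paper demonstrates.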
Consensus problems in networks of agents with switching topology and time-delays. In this paper, we discuss consensus problems for a network of dynamic agents with fixed and switching topologies. We analyze three cases: i) networks with switching topology and no time-delays, ii) networks with fixed topology and communication time-delays, and iii) max-consensus problems (or leader determination) for groups of discrete-time agents. In each case, we introduce a linear/nonlinear consensus protocol and provide convergence analysis for the proposed distributed algorithm. Moreover, we establish a connection between the Fiedler eigenvalue of the information flow in a network (i.e. algebraic connectivity of the network) and the negotiation speed (or performance) of the corresponding agreement protocol. It turns out that balanced digraphs play an important role in addressing average-consensus problems. We introduce disagreement functions that play the role of Lyapunov functions in convergence analysis of consensus protocols. A distinctive feature of this work is to address consensus problems for networks with directed information flow. We provide analytical tools that rely on algebraic graph theory, matrix theory, and control theory. Simulations are provided that demonstrate the effectiveness of our theoretical results.
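The linear consensus protocol for the undirected, delay-free case can be written as x(k+1) = x(k) − ε·L·x(k), where L is the graph Laplacian; convergence speed is governed by the Fiedler eigenvalue the abstract mentions. A minimal sketch (step size and iteration count are illustrative choices):

```python
import numpy as np

def average_consensus(x0, adjacency, eps, steps):
    # Discrete-time linear consensus: each agent nudges its state toward
    # its neighbors, x_i(k+1) = x_i(k) + eps * sum_j a_ij (x_j(k) - x_i(k)).
    # For a connected undirected graph and eps < 1/max_degree, all states
    # converge to the average of the initial values; the rate is set by the
    # Fiedler eigenvalue (algebraic connectivity) of the Laplacian L.
    A = np.asarray(adjacency, dtype=float)
    L = np.diag(A.sum(axis=1)) - A  # graph Laplacian
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x - eps * (L @ x)
    return x
```

On a three-node path graph with initial states 0, 3, 6, the iteration drives every agent to the average value 3.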
Gossip-based aggregation in large dynamic networks As computer networks increase in size, become more heterogeneous and span greater geographic distances, applications must be designed to cope with the very large scale, poor reliability, and often, with the extreme dynamism of the underlying network. Aggregation is a key functional building block for such applications: it refers to a set of functions that provide components of a distributed system access to global information including network size, average load, average uptime, location and description of hotspots, and so on. Local access to global information is often very useful, if not indispensable for building applications that are robust and adaptive. For example, in an industrial control application, some aggregate value reaching a threshold may trigger the execution of certain actions; a distributed storage system will want to know the total available free space; load-balancing protocols may benefit from knowing the target average load so as to minimize the load they transfer. We propose a gossip-based protocol for computing aggregate values over network components in a fully decentralized fashion. The class of aggregate functions we can compute is very broad and includes many useful special cases such as counting, averages, sums, products, and extremal values. The protocol is suitable for extremely large and highly dynamic systems due to its proactive structure---all nodes receive the aggregate value continuously, thus being able to track any changes in the system. The protocol is also extremely lightweight, making it suitable for many distributed applications including peer-to-peer and grid computing systems. We demonstrate the efficiency and robustness of our gossip-based protocol both theoretically and experimentally under a variety of scenarios including node and communication failures.
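The averaging special case of the protocol can be sketched as push-pull gossip: each node periodically contacts a random peer and both replace their values with the pairwise average, so every local value converges to the global average with no coordinator. The synchronous round structure and fixed seed below are simplifications for illustration.

```python
import random

def gossip_average(values, rounds, seed=0):
    # Push-pull gossip averaging: each pairwise exchange conserves the sum,
    # so the common limit of all local values is the global average.
    rng = random.Random(seed)
    vals = list(values)
    n = len(vals)
    for _ in range(rounds):
        for i in range(n):
            j = rng.randrange(n)
            if j != i:
                avg = (vals[i] + vals[j]) / 2
                vals[i] = vals[j] = avg
    return vals
```

Counting, sums, and extremal values follow the same pattern with a different merge operator (e.g. `max` instead of the mean), which is why the class of computable aggregates is so broad.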
On receding horizon feedback control Receding horizon feedback control (RHFC) was originally introduced as an easy method for designing stable state-feedback controllers for linear systems. Here those results are generalized to the control of nonlinear autonomous systems, and we develop a performance index which is minimized by the RHFC (inverse optimal control problem). Previous results for linear systems have shown that desirable nonlinear controllers can be developed by making the RHFC horizon distance a function of the state. That functional dependence was implicit and difficult to implement on-line. Here we develop similar controllers for which the horizon distance is an easily computed explicit function of the state.
Cross-layer sensors for green cognitive radio. Green cognitive radio is a cognitive radio (CR) that is aware of sustainable development issues and deals with an additional constraint as regards the decision-making function of the cognitive cycle. In this paper, it is explained how the sensors distributed throughout the different layers of our CR model could help in making the best decision in order to best contribute to sustainable development.
20.3 A feedforward controlled on-chip switched-capacitor voltage regulator delivering 10W in 32nm SOI CMOS On-chip (or fully integrated) switched-capacitor (SC) voltage regulators (SCVR) have recently received a lot of attention due to their ease of monolithic integration. The use of deep trench capacitors can lead to SCVR implementations that simultaneously achieve high efficiency, high power density, and fast response time. For the application of granular power distribution of many-core microprocessor systems, the on-chip SCVR must maintain an output voltage above a certain minimum level Vout,min in order for the microprocessor core to meet setup time requirements. Following a transient load change, the output voltage typically exhibits a droop due to parasitic inductances and resistances in the power distribution network. Therefore, the steady-state output voltage is kept high enough to ensure VOUT > Vout,min at all times, thereby introducing an output voltage overhead that leads to increased system power consumption. The output voltage droop can be reduced by implementing fast regulation and a sufficient amount of on-chip decoupling capacitance. However, a large amount of on-chip decoupling capacitance is needed to significantly reduce the droop, and it becomes impractical to implement owing to the large chip area overhead required. This paper presents a feedforward control scheme that significantly reduces the output voltage droop in the presence of a large input voltage droop following a transient event. This in turn reduces the required output voltage overhead and may lead to significant overall system power savings.
An Energy-Efficient SAR ADC With Event-Triggered Error Correction This brief presents an energy-efficient fully differential 10-bit successive approximation register (SAR) analog-to-digital converter (ADC) with a sampling rate of 320 kS/s. The optimal capacitor split and bypass number is analyzed to achieve the highest switching energy efficiency. The common-mode voltage level remains constant during the MSB-capacitor switching cycles. To minimize nonlinearity due to charge averaging voltage offset or DAC array mismatch, an event-triggered error correction method is employed as a redundant cycle for detecting digital code errors within 1 least significant bit (LSB). A test chip was fabricated using the 180-nm CMOS process and occupied a 0.0564-mm² core area. Under a regular 0.65-V supply voltage, the ADC achieved an effective number of bits of 9.61 bits and a figure of merit (FOM) of 6.38 fJ/conversion-step, with 1.6-µW power dissipation for a low-frequency input. The measured differential and integral nonlinearity results are within 0.30 LSB and 0.43 LSB, respectively.
A 0.024mm2 8b 400MS/s SAR ADC with 2b/cycle and resistive DAC in 65nm CMOS.
A 6.2mW 7b 3.5GS/s time interleaved 2-stage pipelined ADC in 40nm CMOS A 7b time interleaved hybrid ADC in 40nm CMOS is presented. The ADC consists of two pipelined stages and combines an intrinsically linear SAR with a fully calibrated binary search architecture to achieve energy efficiency. The first stage of each channel consists of a 3b SAR followed by a dynamic amplifier merged with a comparator. The second stage is a 3b comparator-based asynchronous binary search with threshold calibration to compensate amplifier nonlinearity. The calibration references are generated on chip by using the DAC embedded in the first stage. The prototype achieves a peak SNDR of 38dB at 3.5GS/s while consuming approximately 6.2mW.
A 4.5-mW 8-b 750-MS/s 2-b/step asynchronous subranged SAR ADC in 28-nm CMOS technology An 8-b 2-b/step asynchronous subranged SAR ADC is presented. It incorporates a subranging technique to obtain fast reference settling for the MSB conversion. The capacitive interpolation reduces the number of NMOS switches and lowers the matching requirement of a resistive DAC. The proposed timing scheme avoids the need for a specific duty cycle of the external clock to define the sampling period, as in a conventional asynchronous SAR ADC. Operating at 750 MS/s, this ADC consumes 4.5 mW from a 1-V supply and achieves an ENOB of 7.2 and an FOM of 41 fJ/conversion-step. It is fabricated in 28-nm CMOS technology and occupies an active area of 0.004 mm².
A 2.2mW 5b 1.75GS/s Folding Flash ADC in 90nm Digital CMOS
A 10b 100MS/s 1.13mW SAR ADC with binary-scaled error compensation This paper presents a 10 b SAR ADC with a binary-scaled error compensation technique. The prototype occupies an active area of 155 × 165 µm² in 65 nm CMOS. At 100 MS/s, the ADC achieves an SNDR of 59.0 dB and an SFDR of 75.6 dB, while consuming 1.13 mW from a 1.2 V supply. The FoM is 15.5 fJ/conversion-step.
A Polynomial-Based Time-Varying Filter Structure for the Compensation of Frequency-Response Mismatch Errors in Time-Interleaved ADCs This paper introduces a structure for the compensation of frequency-response mismatch errors in M-channel time-interleaved analog-to-digital converters (ADCs). It makes use of a number of fixed digital filters, approximating differentiators of different orders, and a few variable multipliers that correspond to parameters in polynomial models of the channel frequency responses. Whenever the channel frequency responses change, which occurs from time to time in a practical time-interleaved ADC, it suffices to alter the values of these variable multipliers. In this way, expensive on-line filter design is avoided. The paper includes several design examples that illustrate the properties and capabilities of the proposed structure.
Column-oriented database systems Column-oriented database systems (column-stores) have attracted a lot of attention in the past few years. Column-stores, in a nutshell, store each database table column separately, with attribute values belonging to the same column stored contiguously, compressed, and densely packed, as opposed to traditional database systems that store entire records (rows) one after the other. Reading a subset of a table's columns becomes faster, at the potential expense of excessive disk-head seeking from column to column for scattered reads or updates. After several dozens of research papers and at least a dozen of new column-store start-ups, several questions remain. Are these a new breed of systems or simply old wine in new bottles? How easily can a major row-based system achieve column-store performance? Are column-stores the answer to effortlessly support large-scale data-intensive applications? What are the new, exciting system research problems to tackle? What are the new applications that can be potentially enabled by column-stores? In this tutorial, we present an overview of column-oriented database system technology and address these and other related questions.
Data reorganization in memory using 3D-stacked DRAM In this paper we focus on common data reorganization operations such as shuffle, pack/unpack, swap, transpose, and layout transformations. Although these operations simply relocate the data in the memory, they are costly on conventional systems mainly due to inefficient access patterns, limited data reuse and roundtrip data traversal throughout the memory hierarchy. This paper presents a two pronged approach for efficient data reorganization, which combines (i) a proposed DRAM-aware reshape accelerator integrated within 3D-stacked DRAM, and (ii) a mathematical framework that is used to represent and optimize the reorganization operations. We evaluate our proposed system through two major use cases. First, we demonstrate the reshape accelerator in performing a physical address remapping via data layout transform to utilize the internal parallelism/locality of the 3D-stacked DRAM structure more efficiently for general purpose workloads. Then, we focus on offloading and accelerating commonly used data reorganization routines selected from the Intel Math Kernel Library package. We evaluate the energy and performance benefits of our approach by comparing it against existing optimized implementations on state-of-the-art GPUs and CPUs. For the various test cases, in-memory data reorganization provides orders of magnitude performance and energy efficiency improvements via low overhead hardware.
MapGraph: A High Level API for Fast Development of High Performance Graph Analytics on GPUs High performance graph analytics are critical for a long list of application domains. In recent years, the rapid advancement of many-core processors, in particular graphical processing units (GPUs), has sparked a broad interest in developing high performance parallel graph programs on these architectures. However, the SIMT architecture used in GPUs places particular constraints on both the design and implementation of the algorithms and data structures, making the development of such programs difficult and time-consuming. We present MapGraph, a high performance parallel graph programming framework that delivers up to 3 billion Traversed Edges Per Second (TEPS) on a GPU. MapGraph provides a high-level abstraction that makes it easy to write graph programs and obtain good parallel speedups on GPUs. To deliver high performance, MapGraph dynamically chooses among different scheduling strategies depending on the size of the frontier and the size of the adjacency lists for the vertices in the frontier. In addition, a Structure Of Arrays (SOA) pattern is used to ensure coalesced memory access. Our experiments show that, for many graph analytics algorithms, an implementation, with our abstraction, is up to two orders of magnitude faster than a parallel CPU implementation and is comparable to state-of-the-art, manually optimized GPU implementations. In addition, with our abstraction, new graph analytics can be developed with relatively little effort.
Max-Min D-Cluster Formation in Wireless Ad Hoc Networks An ad hoc network may be logically represented as a set of clusters. The clusterheads form a d-hop dominating set. Each node is at most d hops from a clusterhead. Clusterheads form a virtual backbone and may be used to route packets for nodes in their cluster. Previous heuristics restricted themselves to 1-hop clusters. We show that the minimum d-hop dominating set problem is NP-complete. Then we present a heuristic to form d-clusters in a wireless ad hoc network. Nodes are assumed to have a non-deterministic mobility pattern. Clusters are formed by diffusing node identities along the wireless links. When the heuristic terminates, a node either becomes a clusterhead, or is at most d wireless hops away from its clusterhead. The value of d is a parameter of the heuristic. The heuristic can be run either at regular intervals, or whenever the network configuration changes. One of the features of the heuristic is that it tends to re-elect existing clusterheads even when the network configuration changes. This helps to reduce the communication overheads during the transition from old clusterheads to new clusterheads. Also, there is a tendency to evenly distribute the mobile nodes among the clusterheads, and to evenly distribute the responsibility of acting as clusterheads among all nodes. Thus, the heuristic is fair and stable. Simulation experiments demonstrate that the proposed heuristic is better than the two earlier heuristics, namely the LCA [1] and Degree-based [11] solutions.
On classification with incomplete data. We address the incomplete-data problem in which feature vectors to be classified are missing data (features). A (supervised) logistic regression algorithm for the classification of incomplete data is developed. Single or multiple imputation for the missing data is avoided by performing analytic integration with an estimated conditional density function (conditioned on the observed data). Conditional density functions are estimated using a Gaussian mixture model (GMM), with parameter estimation performed using both Expectation-Maximization (EM) and Variational Bayesian EM (VB-EM). The proposed supervised algorithm is then extended to the semisupervised case by incorporating graph-based regularization. The semisupervised algorithm utilizes all available data-both incomplete and complete, as well as labeled and unlabeled. Experimental results of the proposed classification algorithms are shown.
Study of Subharmonically Injection-Locked PLLs A complete analysis on subharmonically injection-locked PLLs develops fundamental theory for subharmonic locking phenomenon. It explains the noise shaping phenomenon, locking range and behavior, PVT tolerance, and pseudo locking issue. All of the analyses are verified by real chip measurements. Two 20-GHz PLLs based on the proposed theory are designed and fabricated in 90-nm CMOS technology to dem...
An emergency communication system based on software-defined radio. Wireless telecommunications represent an important asset for public protection and disaster relief (PPDR) organizations as they improve the coordination and the distribution of information among first responders in the field. In large international disaster scenarios, many different PPDR organizations may participate to the response phase of disaster management. In this context, PPDR organizations may use different wireless communication technologies; such diversity may create interoperability barriers and degrade the coordination among first-time responders. In this paper, we present the design, the integration, and the testing of a demonstration system based on software-defined radio (SDR) technology and software communication architecture (SCA) to support PPDR operations with special focus on the provision of satellite communications. This paper describes the main components of the demonstration system, the integration activities as well as the testing scenarios, which were used to evaluate the technical feasibility. The paper also describes the main technical challenges in the implementation and integration of the demonstration system. Finally, future developments for this technology and potential deployment challenges are presented.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
Improved Switched System Approach to Networked Control Systems With Time-Varying Delays An improved switched system approach is proposed for the stability and stabilization of networked control systems with time-varying delays. The approach features a mode-dependent state feedback controller guaranteeing the exponential stability of the closed-loop system, which is implemented within the packet-based control framework. The effectiveness of the proposed approach is finally demonstrated experimentally by a networked inverted pendulum control system.
Offset-Free Model Predictive Control for the Power Control of Three-Phase AC/DC Converters This paper describes an offset-free model predictive control (MPC) algorithm using a disturbance observer (DOB) to control the active/reactive powers of a three-phase AC/DC converter. The strategy of this paper is twofold. One is the use of DOB to remove the offset error and the other is the proper choice of the weighting matrices of a cost index to provide fast error decay with small overshoot. The DOB is designed to estimate the unknown disturbances of the AC/DC converter following the standard Luenberger observer design procedure. The proposed MPC minimizes a one-step-ahead cost index penalizing the predicted tracking error by performing a simple membership test without any use of numerical methods. A systematic way for choosing the weights of the cost index, which guarantees the global stability of the closed-loop system, is proposed. Use of the DOB eliminates the offset tracking errors in the real implementation. Using a 25-kW AC/DC converter, it is experimentally shown that the proposed MPC enhances the power tracking performance while considerably reducing the mutual interference of the active/reactive powers as well as the output voltage.
Further results on cloud control systems. This paper is devoted to further investigating the cloud control systems (CCSs). The benefits and challenges of CCSs are provided. Both new research results of ours and some typical work made by other researchers are presented. It is believed that the CCSs can have huge and promising effects due to their potential advantages.
Moving Horizon Estimation for Mobile Robots With Multirate Sampling. This paper investigates the multirate moving horizon estimation (MMHE) problem for mobile robots with an inertial sensor and a camera, where the sampling rates of the sensors are not identical. In the sense of multirate systems, some sensors may have no measurements at certain sampling times, which can be regarded as measurement missing and may significantly degrade the estimation performance. A bi...
Real-Time Switched Model Predictive Control for a Cyber-Physical Wind Turbine Emulator The high complexity and nonlinearity of wind turbine (WT) systems impose the utilization of rigorous control methods such as model predictive control (MPC). MPC algorithms are computationally intensive requiring investigation of real-time implementability and feasibility in addition to control performance metrics. In this article, a switched model predictive controller (SMPC) is developed, implemented, and investigated for control objectives performance and real-time metrics. This article has two main contributions. First, embedded real-time SMPC is developed using qpOASES as an embedded solver for the online optimal control problem. Second, a cyber-physical real-time emulator for variable-speed variable-pitch utility-scale WT is developed and implemented on an xPC target machine using a high-fidelity linear parameter-varying model. The SMPC is evaluated on the fatigue, aerodynamics, structures, and turbulence (FAST) design code and the real-time emulator. The analysis and investigation of results highlight the feasibility and capability of SMPC for handling control objectives of WT systems within real-time using short control periods.
A New Delay-Compensation Scheme for Networked Control Systems in Controller Area Networks. In this work, we aim to study a new delay-compensation algorithm for networked control systems (NCSs) which are connected via the controller area network (CAN) buses. First, we analyze the property of CAN bus and find the main sources of CAN-bus-induced delays. The system controlled through a CAN bus is formulated into the typical framework of NCSs. Different from the traditional state feedback or...
A Bayesian Method for the Induction of Probabilistic Networks from Data This paper presents a Bayesian method for constructing probabilistic networks from databases. In particular, we focus on constructing Bayesian belief networks. Potential applications include computer-assisted hypothesis testing, automated scientific discovery, and automated construction of probabilistic expert systems. We extend the basic method to handle missing data and hidden (latent) variables. We show how to perform probabilistic inference by averaging over the inferences of multiple belief networks. Results are presented of a preliminary evaluation of an algorithm for constructing a belief network from a database of cases. Finally, we relate the methods in this paper to previous work, and we discuss open problems.
Distributed estimation and quantization An algorithm is developed for the design of a nonlinear, n-sensor, distributed estimation system subject to communication and computation constraints. The algorithm uses only bivariate probability distributions and yields locally optimal estimators that satisfy the required system constraints. It is shown that the algorithm is a generalization of the classical Lloyd-Max results
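The single-sensor building block that the paper generalizes is the classical Lloyd-Max quantizer design: alternate between assigning samples to the nearest reproduction level and moving each level to the centroid of its assigned samples. This sketch is the scalar single-sensor case only, not the paper's n-sensor distributed algorithm; the initialization scheme is an arbitrary choice for illustration.

```python
def lloyd_max(samples, levels, iters=50):
    # Lloyd-Max iteration for a scalar quantizer:
    #  (1) nearest-level assignment partitions the samples into cells;
    #  (2) each reproduction level moves to the mean of its cell.
    # Both steps are non-increasing in mean-squared quantization error,
    # so the codebook converges to a locally optimal design.
    codebook = sorted(samples)[:: max(1, len(samples) // levels)][:levels]
    for _ in range(iters):
        cells = {i: [] for i in range(len(codebook))}
        for x in samples:
            i = min(range(len(codebook)), key=lambda k: (x - codebook[k]) ** 2)
            cells[i].append(x)
        codebook = [sum(c) / len(c) if c else codebook[i]
                    for i, c in sorted(cells.items())]
    return sorted(codebook)
```

As with the distributed version, the result is only locally optimal: the outcome depends on the initial codebook.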
Supporting Aggregate Queries Over Ad-Hoc Wireless Sensor Networks We show how the database community's notion of a generic query interface for data aggregation can be applied to ad-hoc networks of sensor devices. As has been noted in the sensor network literature, aggregation is important as a data reduction tool; networking approaches, however, have focused on application specific solutions, whereas our in-network aggregation approach is driven by a general purpose, SQL-style interface that can execute queries over any type of sensor data while providing opportunities for significant optimization. We present a variety of techniques to improve the reliability and performance of our solution. We also show how grouped aggregates can be efficiently computed and offer a comparison to related systems and database projects.
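The in-network aggregation idea can be sketched as merging fixed-size partial state records up a routing tree: each sensor combines its children's partial aggregates with its own reading and forwards one record, so every link carries constant-size state instead of all raw readings. The (sum, count) record for AVG below is one example; the tree and reading values are made up for illustration.

```python
def tree_aggregate(tree, readings, root):
    # tree: parent -> list of children (the sensor routing tree);
    # readings: node -> local sensor value.
    # Each node merges its children's partial state (sum, count) with its
    # own reading and forwards a single record upward; the root finishes
    # the AVG. MIN/MAX/COUNT/SUM follow with different merge operators.
    def partial(node):
        s, c = readings[node], 1
        for child in tree.get(node, []):
            cs, cc = partial(child)
            s, c = s + cs, c + cc
        return s, c
    s, c = partial(root)
    return s / c  # AVG computed at the root
```

The data-reduction benefit is exactly this: the number of transmitted values per epoch is one record per node, independent of subtree size.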
Exploiting ILP, TLP, and DLP with the polymorphous TRIPS architecture This paper describes the polymorphous TRIPS architecture which can be configured for different granularities and types of parallelism. TRIPS contains mechanisms that enable the processing cores and the on-chip memory system to be configured and combined in different modes for instruction, data, or thread-level parallelism. To adapt to small and large-grain concurrency, the TRIPS architecture contains four out-of-order, 16-wide-issue Grid Processor cores, which can be partitioned when easily extractable fine-grained parallelism exists. This approach to polymorphism provides better performance across a wide range of application types than an approach in which many small processors are aggregated to run workloads with irregular parallelism. Our results show that high performance can be obtained in each of the three modes--ILP, TLP, and DLP-demonstrating the viability of the polymorphous coarse-grained approach for future microprocessors.
A 10-Gb/s CMOS clock and data recovery circuit with a half-rate binary phase/frequency detector A 10-Gb/s phase-locked clock and data recovery circuit incorporates a multiphase LC oscillator and a half-rate phase/frequency detector with automatic data retiming. Fabricated in 0.18-μm CMOS technology in an area of 1.75×1.55 mm², the circuit exhibits a capture range of 1.43 GHz, an rms jitter of 0.8 ps, a peak-to-peak jitter of 9.9 ps, and a bit error rate of 10^-9 with a pseudorandom bit sequence of 2^23-1. The power dissipation excluding the output buffers is 91 mW from a 1.8-V supply.
A Dht-Based Discovery Service For The Internet Of Things Current trends towards the Future Internet are envisaging the conception of novel services endowed with context-aware and autonomic capabilities to improve end users' quality of life. The Internet of Things paradigm is expected to contribute towards this ambitious vision by proposing models and mechanisms enabling the creation of networks of "smart things" on a large scale. It is widely recognized that efficient mechanisms for discovering available resources and capabilities are required to realize such vision. The contribution of this work consists in a novel discovery service for the Internet of Things. The proposed solution adopts a peer-to-peer approach for guaranteeing scalability, robustness, and easy maintenance of the overall system. While most existing peer-to-peer discovery services proposed for the IoT support solely exact match queries on a single attribute (i. e., the object identifier), our solution can handle multiattribute and range queries. We defined a layered approach by distinguishing three main aspects: multiattribute indexing, range query support, peer-to-peer routing. We chose to adopt an over-DHT indexing scheme to guarantee ease of design and implementation principles. We report on the implementation of a Proof of Concept in a dangerous goods monitoring scenario, and, finally, we discuss test results for structural properties and query performance evaluation.
Current-mode adaptively hysteretic control for buck converters with fast transient response and improved output regulation This paper presents a current-mode adaptively hysteretic control (CMAHC) technique to achieve fast transient response for DC-DC buck converters. A complementary full-range current sensor comprising charging-path and discharging-path sensing transistors is proposed to track the inductor current seamlessly. With the proposed current-mode adaptively hysteretic topology, the inductor current is continuously monitored, and the adaptively hysteretic threshold is dynamically adjusted according to the feedback information coming from the output voltage level. Therefore, a fast load-transient response can be achieved. Besides, the output regulation performance is also improved by the proposed dynamic current-scaling circuitry (DCSC). Moreover, the proposed CMAHC topology can be used in a nearly zero R_ESR design configuration. The prototype fabricated using the TSMC 0.25μm CMOS process occupies an area of 1.78 mm² including all bonding pads. Experimental results show that the output voltage ripple is smaller than 30mV over a wide loading current from 0 mA to 500 mA with maximum power conversion efficiency higher than 90%. The recovery time from light to heavy load (100 to 500 mA) is smaller than 5μs.
A VCO-Based Nonuniform Sampling ADC Using a Slope-Dependent Pulse Generator This paper presents a voltage-controlled oscillator (VCO)-based nonuniform sampling analog-to-digital converter (ADC) as an alternative to the level-crossing (LC)-based converters for digitizing biopotential signals. This work aims to provide a good signal-to-noise-and-distortion ratio at a low average sampling rate. In the proposed conversion method, a slope-dependent pulse generation block is used to provide a variable sample rate adjusted according to the input signal's slope. Simulation results show that the introduced method meets a target reconstruction quality with a sampling rate approaching 92 Sps, while on the same MIT-BIH Arrhythmia record 106 ECG benchmark, the classic LC-based approach requires a sampling rate higher than 500 Sps. The benefits of the proposed method are more remarkable when the input signal is very noisy. The proposed ADC achieves a compression ratio close to 4, but with only 5.4% root-mean-square difference when tested using the MIT-BIH Arrhythmia Database.
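The slope-dependent sampling policy can be sketched digitally: samples are kept at an interval that shrinks as the local slope grows, so the active portions of the signal are sampled densely and the flat portions sparsely. The interval formula, the gain, and the test signal below are hypothetical stand-ins for the paper's analog pulse-generator block.

```python
import numpy as np

def slope_dependent_sample(t, x, base_interval, min_interval, slope_gain):
    """Keep a sample whenever the adaptive interval has elapsed; a
    steeper local slope shortens the interval (illustrative policy)."""
    kept = [0]
    next_t = t[0]
    for i in range(1, len(t)):
        slope = abs(x[i] - x[i - 1]) / (t[i] - t[i - 1])
        interval = max(min_interval, base_interval / (1.0 + slope_gain * slope))
        if t[i] >= next_t:
            kept.append(i)
            next_t = t[i] + interval
    return kept

# A flat first half followed by an active oscillation.
t = np.linspace(0.0, 1.0, 1001)
x = np.where(t < 0.5, 0.0, 1.0) * np.sin(40 * t)
idx = slope_dependent_sample(t, x, base_interval=0.05,
                             min_interval=0.002, slope_gain=0.5)
flat = sum(1 for i in idx if t[i] < 0.5)
active = sum(1 for i in idx if t[i] >= 0.5)
assert active > flat  # steep regions earn a much denser sample set
```

This is the essence of the claimed compression: the average sampling rate stays low because flat segments contribute almost no samples.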
Dead-beat terminal sliding-mode control: A guaranteed attractiveness approach This paper presents a design method for terminal sliding mode control with guaranteed attractiveness, in which the tracking performance of the closed-loop system is governed by a discretized finite-time system (FTS). To alleviate chattering effectively, the controller is derived from a dead-beat reaching law that defines the sliding variable with a smoothing exponential-rate FTS in discrete time. The closed-loop performance improvement is accomplished by applying a finite-difference uncertainty observer and embedding the measure of uncertainty compensation in the error dynamics. The steady-state error band, the absolute attracting layer, and the monotone decreasing region of the error dynamics are derived in detail. The experimental results verify not only the ease and efficiency of controller design via the discretized FTS approach, but also the attractiveness and robustness properties assured in the closed-loop system.
A Probabilistic Neural-Fuzzy Learning System for Stochastic Modeling A probabilistic fuzzy neural network (PFNN) with a hybrid learning mechanism is proposed to handle complex stochastic uncertainties. Fuzzy logic systems (FLSs) are well known for vagueness processing. Embedded with the probabilistic method, an FLS will possess the capability to capture stochastic uncertainties. Further enhanced with the neural learning, it will be able to work under time-varying stochastic environment. Integrated with a statistical process control (SPC) based monitoring method, the PFNN can maintain the robust modeling performance. Finally, the successful simulation demonstrates the modeling effectiveness of the proposed PFNN under the time-varying stochastic conditions.
Design of Fuzzy-Neural-Network-Inherited Backstepping Control for Robot Manipulator Including Actuator Dynamics This study presents the design and analysis of an intelligent control system that inherits the systematic and recursive design methodology for an n-link robot manipulator, including actuator dynamics, in order to achieve a high-precision position tracking with a firm stability and robustness. First, the coupled higher order dynamic model of an n-link robot manipulator is introduced briefly. Then, a conventional backstepping control (BSC) scheme is developed for the joint position tracking of the robot manipulator. Moreover, a fuzzy-neural-network-inherited BSC (FNNIBSC) scheme is proposed to relax the requirement of detailed system information to improve the robustness of BSC and to deal with the serious chattering that is caused by the discontinuous function. In the FNNIBSC strategy, the FNN framework is designed to mimic the BSC law, and adaptive tuning algorithms for network parameters are derived in the sense of the projection algorithm and Lyapunov stability theorem to ensure the network convergence as well as stable control performance. Numerical simulations and experimental results of a two-link robot manipulator that is actuated by dc servomotors are provided to justify the claims of the proposed FNNIBSC system, and the superiority of the proposed FNNIBSC scheme is also evaluated by quantitative comparison with previous intelligent control schemes.
Reactive Power Control of Three-Phase Grid-Connected PV System During Grid Faults Using Takagi–Sugeno–Kang Probabilistic Fuzzy Neural Network Control An intelligent controller based on the Takagi-Sugeno-Kang-type probabilistic fuzzy neural network with an asymmetric membership function (TSKPFNN-AMF) is developed in this paper for the reactive and active power control of a three-phase grid-connected photovoltaic (PV) system during grid faults. The inverter of the three-phase grid-connected PV system should provide a proper ratio of reactive power to meet the low-voltage ride through (LVRT) regulations and control the output current without exceeding the maximum current limit simultaneously during grid faults. Therefore, the proposed intelligent controller regulates the value of reactive power to a new reference value, which complies with the regulations of LVRT under grid faults. Moreover, a dual-mode operation control method of the converter and inverter of the three-phase grid-connected PV system is designed to eliminate the fluctuation of dc-link bus voltage under grid faults. Furthermore, the network structure, the online learning algorithm, and the convergence analysis of the TSKPFNN-AMF are described in detail. Finally, some experimental results are illustrated to show the effectiveness of the proposed control for the three-phase grid-connected PV system.
Discrete-Time Quasi-Sliding-Mode Control With Prescribed Performance Function and its Application to Piezo-Actuated Positioning Systems. In this paper, the constrained control problem of the prescribed performance control technique is discussed in discrete-time domain for single input-single output dynamical systems. The goal of this design is to maintain the tracking error trajectory in a predefined convergence zone described by a performance function in the presence of the uncertainties. In order to achieve this goal, the discret...
Sliding mode control for singularly perturbed Markov jump descriptor systems with nonlinear perturbation This paper develops a stochastic integral sliding mode control strategy for singularly perturbed Markov jump descriptor systems subject to nonlinear perturbation. The transition probabilities (TPs) for the system modes are considered to switch randomly within a finite set. We first present a novel mode and switch-dependent integral switching surface, based upon which the resulting sliding mode dynamics (SMD) only suffers from the unmatched perturbation that is not amplified in the Euclidean norm sense. To overcome the difficulty of synthesizing the nominal controller, we rewrite the SMD into the equivalent descriptor form. By virtue of the fixed-point principle and stochastic system theory, we give a rigorous proof for the existence and uniqueness of the solution and the mean-square exponential admissibility for the transformed SMD. A generalized framework that covers arbitrary switching and Markov switching of the TPs as special cases is further achieved. Then, by analyzing the stochastic reachability of the sliding motion, we synthesize a mode and switch-dependent SMC law. The adaptive technique is further integrated to estimate the unavailable boundaries of the matched perturbation. Finally, simulation results on an electronic circuit system confirm the validity and benefits of the developed control strategy.
The part-time parliament Recent archaeological discoveries on the island of Paxos reveal that the parliament functioned despite the peripatetic propensity of its part-time legislators. The legislators maintained consistent copies of the parliamentary record, despite their frequent forays from the chamber and the forgetfulness of their messengers. The Paxon parliament's protocol provides a new way of implementing the state machine approach to the design of distributed systems.
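The single-decree core of the Paxon protocol fits in a short sketch: acceptors remember the highest ballot they have promised and the last value they accepted, and a proposer that gathers a majority of promises must adopt the highest-ballot value already accepted. Message passing is reduced to direct calls and failures are omitted; this is a didactic sketch, not a faithful implementation of the parliamentary metaphor.

```python
class Acceptor:
    """A legislator's ledger: survives forays from the chamber."""
    def __init__(self):
        self.promised = -1          # highest ballot promised
        self.accepted = (-1, None)  # (ballot, value) last accepted

    def prepare(self, ballot):
        if ballot > self.promised:
            self.promised = ballot
            return True, self.accepted
        return False, None

    def accept(self, ballot, value):
        if ballot >= self.promised:
            self.promised = ballot
            self.accepted = (ballot, value)
            return True
        return False

def propose(acceptors, ballot, value):
    """Phase 1 then phase 2 against a majority; returns the chosen value."""
    promises = [a.prepare(ballot) for a in acceptors]
    grants = [acc for ok, acc in promises if ok]
    if len(grants) <= len(acceptors) // 2:
        return None
    # Safety rule: adopt the highest-ballot previously accepted value.
    prior = max(grants, key=lambda ba: ba[0])
    if prior[1] is not None:
        value = prior[1]
    votes = sum(a.accept(ballot, value) for a in acceptors)
    return value if votes > len(acceptors) // 2 else None

parliament = [Acceptor() for _ in range(3)]
assert propose(parliament, 1, "decree-37") == "decree-37"
# A later proposer with a higher ballot learns and re-proposes the chosen decree.
assert propose(parliament, 2, "decree-99") == "decree-37"
```

The second call shows why consistency holds: once a decree is chosen, every higher-ballot proposer is forced to re-propose it.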
Design Techniques for Fully Integrated Switched-Capacitor DC-DC Converters. This paper describes design techniques to maximize the efficiency and power density of fully integrated switched-capacitor (SC) DC-DC converters. Circuit design methods are proposed to enable simplified gate drivers while supporting multiple topologies (and hence output voltages). These methods are verified by a proof-of-concept converter prototype implemented in 0.374 mm2 of a 32 nm SOI process. ...
Distributed reset A reset subsystem is designed that can be embedded in an arbitrary distributed system in order to allow the system processes to reset the system when necessary. Our design is layered, and comprises three main components: a leader election, a spanning tree construction, and a diffusing computation. Each of these components is self-stabilizing in the following sense: if the coordination between the up-processes in the system is ever lost (due to failures or repairs of processes and channels), then each component eventually reaches a state where coordination is regained. This capability makes our reset subsystem very robust: it can tolerate fail-stop failures and repairs of processes and channels, even when a reset is in progress
Distributed multi-agent optimization with state-dependent communication We study distributed algorithms for solving global optimization problems in which the objective function is the sum of local objective functions of agents and the constraint set is given by the intersection of local constraint sets of agents. We assume that each agent knows only his own local objective function and constraint set, and exchanges information with the other agents over a randomly varying network topology to update his information state. We assume a state-dependent communication model over this topology: communication is Markovian with respect to the states of the agents and the probability with which the links are available depends on the states of the agents. We study a projected multi-agent subgradient algorithm under state-dependent communication. The state-dependence of the communication introduces significant challenges and couples the study of information exchange with the analysis of subgradient steps and projection errors. We first show that the multi-agent subgradient algorithm when used with a constant stepsize may result in the agent estimates to diverge with probability one. Under some assumptions on the stepsize sequence, we provide convergence rate bounds on a “disagreement metric” between the agent estimates. Our bounds are time-nonhomogeneous in the sense that they depend on the initial starting time. Despite this, we show that agent estimates reach an almost sure consensus and converge to the same optimal solution of the global optimization problem with probability one under different assumptions on the local constraint sets and the stepsize sequence.
Yet another MicroArchitectural Attack: exploiting I-Cache MicroArchitectural Attacks (MA), which can be considered as a special form of Side-Channel Analysis, exploit microarchitectural functionalities of processor implementations and can compromise the security of computational environments even in the presence of sophisticated protection mechanisms like virtualization and sandboxing. This newly evolving research area has attracted significant interest due to the broad application range and the potentials of these attacks. Cache Analysis and Branch Prediction Analysis were the only types of MA that had been known publicly. In this paper, we introduce Instruction Cache (I-Cache) as yet another source of MA and present our experimental results which clearly prove the practicality and danger of I-Cache Attacks.
A decentralized modular control framework for robust control of FES-activated walker-assisted paraplegic walking using terminal sliding mode and fuzzy logic control. A major challenge to developing functional electrical stimulation (FES) systems for paraplegic walking and widespread acceptance of these systems is the design of a robust control strategy that provides satisfactory tracking performance. The systems need to be robust against time-varying properties of neuromusculoskeletal dynamics, day-to-day variations, subject-to-subject variations, external dis...
PUMP: a programmable unit for metadata processing We introduce the Programmable Unit for Metadata Processing (PUMP), a novel software-hardware element that allows flexible computation with uninterpreted metadata alongside the main computation with modest impact on runtime performance (typically 10--40% for single policies, compared to metadata-free computation on 28 SPEC CPU2006 C, C++, and Fortran programs). While a host of prior work has illustrated the value of ad hoc metadata processing for specific policies, we introduce an architectural model for extensible, programmable metadata processing that can handle arbitrary metadata and arbitrary sets of software-defined rules in the spirit of the time-honored 0-1-∞ rule. Our results show that we can match or exceed the performance of dedicated hardware solutions that use metadata to enforce a single policy, while adding the ability to enforce multiple policies simultaneously and achieving flexibility comparable to software solutions for metadata processing. We demonstrate the PUMP by using it to support four diverse safety and security policies---spatial and temporal memory safety, code and data taint tracking, control-flow integrity including return-oriented-programming protection, and instruction/data separation---and quantify the performance they achieve, both singly and in combination.
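The PUMP's rule-cache model can be illustrated in a few lines: a hardware-like cache resolves (opcode, input-tag) tuples, and on a miss it traps to a software-defined policy handler. The tag representation (frozensets of labels) and the taint-union policy below are simplified assumptions, not the paper's hardware interface.

```python
def taint_policy(opcode, tag_a, tag_b):
    """Software miss handler: the result carries the union of input taints."""
    return tag_a | tag_b

class RuleCache:
    """Caches resolved rules so repeated instruction/tag combinations
    avoid the software handler, mimicking the PUMP's rule cache."""
    def __init__(self, policy):
        self.policy = policy
        self.cache = {}
        self.misses = 0

    def resolve(self, opcode, tag_a, tag_b):
        key = (opcode, tag_a, tag_b)
        if key not in self.cache:
            self.misses += 1  # trap to the software-defined policy
            self.cache[key] = self.policy(opcode, tag_a, tag_b)
        return self.cache[key]

rc = RuleCache(taint_policy)
secret = frozenset({"secret"})
clean = frozenset()
assert rc.resolve("add", secret, clean) == secret
assert rc.resolve("add", secret, clean) == secret  # now a cache hit
assert rc.misses == 1
```

Swapping in a different `policy` function is the point of the architecture: the same cache mechanism enforces memory safety, CFI, or taint tracking without hardware changes.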
A 1V 3.5 μW Bio-AFE With Chopper-Capacitor-Chopper Integrator-Based DSL and Low Power GM-C Filter This brief presents a low-noise, low-power bio-signal acquisition analog front-end (Bio-AFE). It mainly includes a capacitively coupled chopper-stabilized instrumentation amplifier (CCIA), a programmable gain amplifier (PGA), a low-pass filter (LPF), and a successive approximation analog to digital converter (SAR ADC). A chopper-capacitor-chopper integrator based DC servo loop (C3IB-DSL...
FReaC Cache: Folded-logic Reconfigurable Computing in the Last Level Cache The need for higher energy efficiency has resulted in the proliferation of accelerators across platforms, with custom and reconfigurable accelerators adopted in both edge devices and cloud servers. However, existing solutions fall short in providing accelerators with low-latency, high-bandwidth access to the working set and suffer from the high latency and energy cost of data transfers. Such costs can severely limit the smallest granularity of the tasks that can be accelerated and thus the applicability of the accelerators. In this work, we present FReaC Cache, a novel architecture that natively supports reconfigurable computing in the last level cache (LLC), thereby giving energy-efficient accelerators low-latency, high-bandwidth access to the working set. By leveraging the cache's existing dense memory arrays, buses, and logic folding, we construct a reconfigurable fabric in the LLC with minimal changes to the system, processor, cache, and memory architecture. FReaC Cache is a low-latency, low-cost, and low-power alternative to off-die/off-chip accelerators, and a flexible, low-cost alternative to fixed-function accelerators. We demonstrate an average speedup of 3X and Perf/W improvements of 6.1X over an edge-class multi-core CPU, and add 3.5% to 15.3% area overhead per cache slice.
Architecture Aware Partitioning Algorithms Existing partitioning algorithms provide limited support for load balancing simulations that are performed on heterogeneous parallel computing platforms. On such architectures, effective load balancing can only be achieved if the graph is distributed so that it properly takes into account the available resources (CPU speed, network bandwidth). With heterogeneous technologies becoming more popular, the need for suitable graph partitioning algorithms is critical. We developed such algorithms that can address the partitioning requirements of scientific computations, and can correctly model the architectural characteristics of emerging hardware platforms.
AMD Fusion APU: Llano The Llano variant of the AMD Fusion accelerated processor unit (APU) deploys AMD Turbo CORE technology to maximize processor performance within the system's thermal design limits. Low-power design and performance/watt ratio optimization were key design approaches, and power gating is implemented pervasively across the APU.
Decoupling Data Supply from Computation for Latency-Tolerant Communication in Heterogeneous Architectures. In today’s computers, heterogeneous processing is used to meet performance targets at manageable power. In adopting increased compute specialization, however, the relative amount of time spent on communication increases. System and software optimizations for communication often come at the cost of increased complexity and reduced portability. The Decoupled Supply-Compute (DeSC) approach offers a way to attack communication latency bottlenecks automatically, while maintaining good portability and low complexity. Our work expands prior Decoupled Access Execute techniques with hardware/software specialization. For a range of workloads, DeSC offers roughly 2× speedup, and additional specialized compression optimizations reduce traffic between decoupled units by 40%.
Stream Floating: Enabling Proactive and Decentralized Cache Optimizations As multicore systems continue to grow in scale and on-chip memory capacity, the on-chip network bandwidth and latency become problematic bottlenecks. Because of this, overheads in data transfer, the coherence protocol and replacement policies become increasingly important. Unfortunately, even in well-structured programs, many natural optimizations are difficult to implement because of the reactive...
Decentralized Offload-based Execution on Memory-centric Compute Cores.
QsCores: trading dark silicon for scalable energy efficiency with quasi-specific cores Transistor density continues to increase exponentially, but power dissipation per transistor is improving only slightly with each generation of Moore's law. Given the constant chip-level power budgets, this exponentially decreases the percentage of transistors that can switch at full frequency with each technology generation. Hence, while the transistor budget continues to increase exponentially, the power budget has become the dominant limiting factor in processor design. In this regime, utilizing transistors to design specialized cores that optimize energy-per-computation becomes an effective approach to improve system performance. To trade transistors for energy efficiency in a scalable manner, we propose Quasi-specific Cores, or QsCores, specialized processors capable of executing multiple general-purpose computations while providing an order of magnitude more energy efficiency than a general-purpose processor. The QsCores design flow is based on the insight that similar code patterns exist within and across applications. Our approach exploits these similar code patterns to ensure that a small set of specialized cores support a large number of commonly used computations. We evaluate QsCores's ability to target both a single application library (e.g., data structures) as well as a diverse workload consisting of applications selected from different domains (e.g., SPECINT, EEMBC, and Vision). Our results show that QsCores can provide 18.4 x better energy efficiency than general-purpose processors while reducing the amount of specialized logic required to support the workload by up to 66%.
Wireless sensor networks: a survey This paper describes the concept of sensor networks which has been made viable by the convergence of micro-electro-mechanical systems technology, wireless communications and digital electronics. First, the sensing tasks and the potential sensor networks applications are explored, and a review of factors influencing the design of sensor networks is provided. Then, the communication architecture for sensor networks is outlined, and the algorithms and protocols developed for each layer in the literature are explored. Open research issues for the realization of sensor networks are also discussed.
Network-based robust H∞ control of systems with uncertainty This paper is concerned with the design of robust H∞ controllers for uncertain networked control systems (NCSs) with the effects of both the network-induced delay and data dropout taken into consideration. A new analysis method for H∞ performance of NCSs is provided by introducing some slack matrix variables and employing the information of the lower bound of the network-induced delay. The designed H∞ controller is of memoryless type, which can be obtained by solving a set of linear matrix inequalities. Numerical examples and simulation results are given finally to illustrate the effectiveness of the method.
Incremental Stochastic Subgradient Algorithms for Convex Optimization This paper studies the effect of stochastic errors on two constrained incremental subgradient algorithms. The incremental subgradient algorithms are viewed as decentralized network optimization algorithms as applied to minimize a sum of functions, when each component function is known only to a particular agent of a distributed network. First, the standard cyclic incremental subgradient algorithm is studied. In this, the agents form a ring structure and pass the iterate in a cycle. When there are stochastic errors in the subgradient evaluations, sufficient conditions on the moments of the stochastic errors are obtained that guarantee almost sure convergence when a diminishing step-size is used. In addition, almost sure bounds on the algorithm's performance with a constant step-size are also obtained. Next, the Markov randomized incremental subgradient method is studied. This is a noncyclic version of the incremental algorithm where the sequence of computing agents is modeled as a time nonhomogeneous Markov chain. Such a model is appropriate for mobile networks, as the network topology changes across time in these networks. Convergence results and error bounds for the Markov randomized method in the presence of stochastic errors for diminishing and constant step-sizes are obtained.
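The cyclic incremental subgradient method with stochastic errors can be sketched on a scalar problem: minimizing a sum of absolute-value terms, whose optimum is the median of the anchors. The ring of agents is modeled as a simple loop, the stochastic error is additive noise on each subgradient, and the diminishing stepsize matches the almost-sure-convergence setting; the objective and noise level are illustrative choices.

```python
import numpy as np

def cyclic_incremental_subgradient(anchors, x0=0.0, cycles=2000):
    """Minimize sum_i |x - a_i| by passing the iterate around a ring of
    agents; each agent takes one noisy subgradient step of its own
    component with a diminishing stepsize 1/k."""
    rng = np.random.default_rng(1)
    x = x0
    k = 0
    for _ in range(cycles):
        for a in anchors:                         # one pass around the ring
            k += 1
            step = 1.0 / k                        # diminishing stepsize
            g = np.sign(x - a)                    # subgradient of |x - a|
            g += 0.01 * rng.standard_normal()     # bounded-moment error
            x -= step * g
    return x

x_star = cyclic_incremental_subgradient([1.0, 4.0, 10.0])
assert abs(x_star - 4.0) < 0.5  # optimum of sum_i |x - a_i| is the median
```

With a constant stepsize the iterate would instead hover in a band around the median, matching the paper's constant-stepsize error bounds.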
Wireless communications in the twenty-first century: a perspective Wireless communications are expected to be the dominant mode of access technology in the next century. Besides voice, a new range of services such as multimedia, high-speed data, etc. are being offered for delivery over wireless networks. Mobility will be seamless, realizing the concept of persons being in contact anywhere, at any time. Two developments are likely to have a substantial impact on t...
A 60-GHz 16QAM/8PSK/QPSK/BPSK Direct-Conversion Transceiver for IEEE802.15.3c. This paper presents a 60-GHz direct-conversion transceiver using 60-GHz quadrature oscillators. The transceiver has been fabricated in a standard 65-nm CMOS process. It includes a receiver with a 17.3-dB conversion gain and less than 8.0-dB noise figure, a transmitter with a 18.3-dB conversion gain, a 9.5-dBm output 1 dB compression point, a 10.9-dBm saturation output power and 8.8-% power added ...
Reduction and IR-drop compensation techniques for reliable neuromorphic computing systems Neuromorphic computing system (NCS) is a promising architecture to combat the well-known memory bottleneck in Von Neumann architecture. The recent breakthrough on memristor devices made an important step toward realizing a low-power, small-footprint NCS on-a-chip. However, the currently low manufacturing reliability of nano-devices and the voltage IR-drop along metal wires and memristor arrays severely limit the scale of memristor crossbar based NCS and hinder the design scalability. In this work, we propose a novel system reduction scheme that significantly lowers the required dimension of the memristor crossbars in NCS while maintaining high computing accuracy. An IR-drop compensation technique is also proposed to overcome the adverse impacts of the wire resistance and the sneak-path problem in large memristor crossbar designs. Our simulation results show that the proposed techniques can improve computing accuracy by 27.0% and reduce circuit area by 38.7% compared to the original NCS design.
A Bidirectional Neural Interface IC With Chopper Stabilized BioADC Array and Charge Balanced Stimulator. We present a bidirectional neural interface with a 4-channel biopotential analog-to-digital converter (bioADC) and a 4-channel current-mode stimulator in 180 nm CMOS. The bioADC directly transduces microvolt biopotentials into a digital representation without a voltage-amplification stage. Each bioADC channel comprises a continuous-time first-order ΔΣ modulator with a chopper-stabilized OTA input ...
Signal receiving and processing platform of the experimental passive radar for intelligent surveillance system using software defined radio approach This document presents a signal receiving and processing platform for an experimental FM-radio-based multistatic passive radar utilizing Software Defined Radio. The radar was designed as part of an intelligent surveillance system. Our platform consists of a reconfigurable multi-sensor antenna, radio frequency (RF) front-end hardware, and a personal computer host executing modified GNU Radio code. Universal Software Radio Peripherals (USRPs) were used as the RF hardware (receiver and downconverter). We present and discuss different approaches to constructing the multichannel receiver and signal processing platform for a passive radar utilizing USRP devices and GNU Radio. After downconversion on the FPGA of the USRP, received signals are transmitted to the PC host, where the second stage of data processing, digital beamforming, takes place. The digital beamforming algorithm estimates echo signals reflected from a flying target. After the echo signals are estimated, Range-Doppler surfaces can be computed in order to estimate the target position.
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use dominance frontiers, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
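Dominance frontiers are easy to compute once immediate dominators are known. The sketch below uses the later Cooper–Harvey–Kennedy formulation (walking up the dominator tree from each join point) rather than this paper's bottom-up traversal; the diamond-shaped CFG is an illustrative example.

```python
def dominance_frontiers(preds, idom):
    """Compute DF(n) for every node, given predecessor lists and
    immediate dominators (Cooper-Harvey-Kennedy formulation)."""
    df = {n: set() for n in idom}
    for b, ps in preds.items():
        if len(ps) < 2:          # only join points contribute
            continue
        for p in ps:
            runner = p
            while runner != idom[b]:
                df[runner].add(b)     # b is in runner's frontier
                runner = idom[runner]
    return df

# Diamond CFG: entry -> a, entry -> b, a -> join, b -> join.
preds = {"entry": [], "a": ["entry"], "b": ["entry"], "join": ["a", "b"]}
idom = {"entry": None, "a": "entry", "b": "entry", "join": "entry"}
df = dominance_frontiers(preds, idom)
assert df["a"] == {"join"} and df["b"] == {"join"}
assert df["entry"] == set()
```

The frontier of a node is exactly where SSA φ-functions must be placed for definitions made at that node, which is why this concept underpins efficient SSA construction.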
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ⋄W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ⋄W. Thus, ⋄W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
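Chord's single operation, mapping a key onto a node, can be sketched with a sorted ring of node identifiers. The identifier width is illustrative, and the O(log N) finger-table routing that makes real lookups scale is omitted here for brevity.

```python
import hashlib

M = 8  # identifier bits; a 2**M ring (tiny, for illustration)

def chord_id(key: str) -> int:
    """Hash a key onto the identifier ring, as Chord does with SHA-1."""
    return int(hashlib.sha1(key.encode()).hexdigest(), 16) % (2 ** M)

def successor(nodes, ident):
    """The node responsible for ident: the first node clockwise on the ring."""
    for n in sorted(nodes):
        if n >= ident:
            return n
    return min(nodes)  # wrap around past the highest identifier

ring = [10, 52, 110, 200, 240]
assert successor(ring, 100) == 110
assert successor(ring, 250) == 10   # wraps around the ring
node = successor(ring, chord_id("object-42"))
assert node in ring
```

Storing a key/data pair at `successor(ring, chord_id(key))` is the whole data-location scheme; node joins and departures only shift responsibility between ring neighbors.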
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
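A simplified three-valued-logic treatment of nulls (in the style of SQL's "unknown" value, not necessarily this paper's exact single-null semantics) shows how query evaluation behaves in the presence of null values: a selection keeps only rows whose predicate is definitely true.

```python
# Three-valued logic: True, False, and None for "unknown".

def tv_and(a, b):
    if a is False or b is False:
        return False
    if a is None or b is None:
        return None
    return True

def tv_or(a, b):
    if a is True or b is True:
        return True
    if a is None or b is None:
        return None
    return False

def tv_not(a):
    return None if a is None else (not a)

def null_eq(x, y):
    """Any comparison against a null yields 'unknown'."""
    if x is None or y is None:
        return None
    return x == y

# A selection keeps only rows whose predicate evaluates to True,
# so rows with a null attribute are neither selected nor rejected.
rows = [{"city": "Oslo"}, {"city": None}, {"city": "Bergen"}]
selected = [r for r in rows if null_eq(r["city"], "Oslo") is True]
assert selected == [{"city": "Oslo"}]
```

The same "definitely true" rule generalizes the relational operators: union, difference, and inclusion must all account for tuples whose membership is unknown.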
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area2 product (EDA2P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA2P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)- size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε2}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D+√n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(logN), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator; while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via error amplifier) with a moderate-gain high-frequency path (via bandpass filter) at the inputs of PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate two zeros required in a Type III compensator. Constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process with constant Gm/C biasing technique being applied to one of the designs. Measurement result shows that converter output is settled within 7 μs for a load current step of 500 mA. Peak efficiency of 97% is obtained at 360 mW output power, and high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of proposed compensator is reduced by > 75 % in both designs, compared to an equivalent conventional Type III compensator.
Conductance modulation techniques in switched-capacitor DC-DC converter for maximum-efficiency tracking and ripple mitigation in 22nm Tri-gate CMOS Active conduction modulation techniques are demonstrated in a fully integrated multi-ratio switched-capacitor voltage regulator with hysteretic control, implemented in 22nm tri-gate CMOS with high-density MIM capacitors. We present (i) an adaptive switching frequency and switch-size scaling scheme for maximum-efficiency tracking across a wide range of voltages and currents, governed by a frequency-based control law that is experimentally validated across multiple dies and temperatures, and (ii) a simple active ripple mitigation technique to modulate the gate drive of select MOSFET switches effectively in all conversion modes. Efficiency boosts of up to 15% are measured under light-load conditions. A load-independent output ripple of <50 mV is achieved, enabling fewer interleaving phases. Test-chip implementations and measurements demonstrate ease of integration in SoC designs, power-efficiency benefits, and EMI/RFI improvements.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54 ×(CPU), 1.56 ×(GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02 ×(memristor-based PIM) and 10.48 ×(CMOS-based PIM), on average.
score_0 – score_13: 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
A Clock-Phase Reuse Technique for Discrete-Time Bandpass Filters In this article, we apply a new clock-phase reuse technique to a discrete-time infinite impulse response (IIR) complex-signaling bandpass filter (BPF). This leads to a deep improvement in filtering, especially the stopband rejection, while maintaining the area, sampling frequency, and the number of clock phases and their pulsewidths. Fabricated in 28-nm CMOS, the proposed BPF is highly tuneable an...
A Charge-Rotating IIR Filter with Linear Interpolation and High Stop-Band Rejection This paper introduces a new architecture of a discrete-time charge-rotating low-pass filter (LPF) which achieves a high-order of filtering and improves its stop-band rejection while maintaining a reasonable duty cycle of the main clock at 20%. Its key innovation is a linear interpolation within the charge-accumulation operation. Fabricated in 28-nm CMOS, the proposed IIR LPF demonstrates a 1-9.9MH...
An 8-bit 100-MHz CMOS linear interpolation DAC An 8-bit 100-MHz CMOS linear interpolation digital-to-analog converter (DAC) is presented. It applies a time-interleaved structure on an 8-bit binary-weighted DAC, using 16 evenly skewed clocks generated by a voltage-controlled delay line to realize the linear interpolation function. The linear interpolation increases the attenuation of the DAC's image components. The requirement for the analog re...
A Quadrature Charge-Domain Sampling Mixer With Embedded FIR, IIR, and N-Path Filters This paper presents the analysis and design of a quadrature charge-domain down-conversion sampling mixer with embedded finite-impulse-response (FIR), infinite-impulse-response (IIR), and 4-path bandpass filters. An in-depth investigation of the principles of periodic impulse sampling, periodic windowed sampling, and periodic N-path windowed sampling is presented and their characteristics are compared. A detailed mathematical treatment of charge-domain windowed samplers with built-in sinc, FIR and IIR filters is provided. A quadrature charge-domain sampler with embedded FIR, IIR, and 4-path band-pass filters is proposed and its performance including the effect of the non-idealities of devices is investigated. The proposed design is implemented in an IBM 130 nm 1.2 V CMOS technology. Simulation results demonstrate that the proposed design is tunable over 50–250 MHz frequency range. For an 100 MHz input, the proposed design exhibits 70 dB aliasing rejection, 60 dB stop band attenuation while consuming current of 145 . The performance of the proposed design is further validated using the results of on-wafer measurement of the fabricated microchip.
Low-Power Highly Selective Channel Filtering Using a Transconductor–Capacitor Analog FIR Analog finite-impulse-response (AFIR) filtering is proposed to realize low-power channel selection filters for Internet-of-Things receivers. High selectivity is achieved using an architecture based on only a single (time-varying) transconductance and integration capacitor. The transconductance is implemented as a digital-to-analog converter and is programmable by an on-chip memory. The AFIR operating principle is shown step by step, including its complete transfer function with aliasing. The filter bandwidth and transfer function are highly programmable through the transconductance coefficients and clock frequency. Moreover, the transconductance programmability allows an almost ideal filter response to be realized by careful analysis and compensation of the parasitic circuit impairments. The filter, manufactured in 22-nm FDSOI, has an active area of 0.09 mm². Its bandwidth can be accurately tuned from 0.06 to 3.4 MHz. The filter consumes 92 μW from a 700-mV supply. This low power consumption is combined with a high selectivity: f−60dB/f−3dB = 3.8. The filter has 31.5-dB gain and 12-nV/√Hz input-referred noise for a 0.43-MHz bandwidth. The OIP3 is 28 dBm, independent of the frequency offset. The output-referred 1-dB-compression point is 3.7 dBm, and the in-band gain compresses by 1 dB for a −3.7-dBm out-of-band input signal while still providing >60 dB of filtering.
Enhanced-Selectivity High-Linearity Low-Noise Mixer-First Receiver With Complex Pole Pair Due to Capacitive Positive Feedback. A mixer-first receiver (RX) with enhanced selectivity and high dynamic range is proposed, targeting to remove surface acoustic-wave-filters in mobile phones and cover all frequency bands up to 6 GHz. Capacitive negative feedback across the baseband (BB) amplifier serves as a blocker bypassing path, while an extra capacitive positive feedback path offers further blocker rejection. This combination ...
Chord: a scalable peer-to-peer lookup protocol for internet applications A fundamental problem that confronts peer-to-peer applications is the efficient location of the node that stores a desired data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis and simulations show that Chord is scalable: Communication cost and the state maintained by each node scale logarithmically with the number of Chord nodes.
Time-delay systems: an overview of some recent advances and open problems After presenting some motivations for the study of time-delay system, this paper recalls modifications (models, stability, structure) arising from the presence of the delay phenomenon. A brief overview of some control approaches is then provided, the sliding mode and time-delay controls in particular. Lastly, some open problems are discussed: the constructive use of the delayed inputs, the digital implementation of distributed delays, the control via the delay, and the handling of information related to the delay value.
Bayesian Network Classifiers Recent work in supervised learning has shown that a surprisingly simple Bayesian classifier with strong assumptions of independence among features, called naive Bayes, is competitive with state-of-the-art classifiers such as C4.5. This fact raises the question of whether a classifier with less restrictive assumptions can perform even better. In this paper we evaluate approaches for inducing classifiers from data, based on the theory of learning Bayesian networks. These networks are factored representations of probability distributions that generalize the naive Bayesian classifier and explicitly represent statements about independence. Among these approaches we single out a method we call Tree Augmented Naive Bayes (TAN), which outperforms naive Bayes, yet at the same time maintains the computational simplicity (no search involved) and robustness that characterize naive Bayes. We experimentally tested these approaches, using problems from the University of California at Irvine repository, and compared them to C4.5, naive Bayes, and wrapper methods for feature selection.
A theory of nonsubtractive dither A detailed mathematical investigation of multibit quantizing systems using nonsubtractive dither is presented. It is shown that by the use of dither having a suitably chosen probability density function, moments of the total error can be made independent of the system input signal but that statistical independence of the error and the input signals is not achievable. Similarly, it is demonstrated that values of the total error signal cannot generally be rendered statistically independent of one another but that their joint moments can be controlled and that, in particular, the error sequence can be rendered spectrally white. The properties of some practical dither signals are explored, and recommendations are made for dithering in audio, video, and measurement applications. The paper collects all of the important results on the subject of nonsubtractive dithering and introduces important new ones with the goal of alleviating persistent and widespread misunderstandings regarding the technique
Codejail: Application-Transparent Isolation of Libraries with Tight Program Interactions.
An Opportunistic Cognitive MAC Protocol for Coexistence with WLAN In recent decades, the demand for wireless spectrum has increased rapidly with the development of mobile communication services. Recent studies recognize that traditional fixed spectrum assignment does not use the spectrum efficiently. Such waste can be mitigated by cognitive radio, a new type of technology that enables secondary spectrum usage by unlicensed users. This paper presents an opportunistic cognitive MAC protocol (OC-MAC) that lets cognitive radios access unoccupied spectrum opportunistically and coexist with wireless local area networks (WLANs). Through a primary-traffic prediction model and a transmission etiquette, OC-MAC avoids inflicting fatal damage on licensed users. An ns-2 simulation model is then developed to evaluate its performance in scenarios with a coexisting WLAN and cognitive network.
20.3 A feedforward controlled on-chip switched-capacitor voltage regulator delivering 10W in 32nm SOI CMOS On-chip (or fully integrated) switched-capacitor (SC) voltage regulators (SCVR) have recently received a lot of attention due to their ease of monolithic integration. The use of deep trench capacitors can lead to SCVR implementations that simultaneously achieve high efficiency, high power density, and fast response time. For the application of granular power distribution of many-core microprocessor systems, the on-chip SCVR must maintain an output voltage above a certain minimum level Vout,min in order for the microprocessor core to meet setup-time requirements. Following a transient load change, the output voltage typically exhibits a droop due to parasitic inductances and resistances in the power distribution network. Therefore, the steady-state output voltage is kept high enough to ensure VOUT > Vout,min at all times, thereby introducing an output voltage overhead that leads to increased system power consumption. The output voltage droop can be reduced by implementing fast regulation and a sufficient amount of on-chip decoupling capacitance. However, a large amount of on-chip decoupling capacitance is needed to significantly reduce the droop, and it becomes impractical to implement owing to the large chip area overhead required. This paper presents a feedforward control scheme that significantly reduces the output voltage droop in the presence of a large input voltage droop following a transient event. This in turn reduces the required output voltage overhead and may lead to significant overall system power savings.
Neuropixels Data-Acquisition System: A Scalable Platform for Parallel Recording of 10,000+ Electrophysiological Signals. Although CMOS fabrication has enabled a quick evolution in the design of high-density neural probes and neural-recording chips, the scaling and miniaturization of the complete data-acquisition systems has happened at a slower pace. This is mainly due to the complexity and the many requirements that change depending on the specific experimental settings. In essence, the fundamental challenge of a n...
score_0 – score_13: 1.1, 0.1, 0.1, 0.1, 0.1, 0.025, 0, 0, 0, 0, 0, 0, 0, 0