Dataset schema (column, dtype, observed minimum and maximum; string columns report string lengths, score columns report values):

  Query Text    string   lengths 10 to 59.9k
  Ranking 1     string   lengths 10 to 4.53k
  Ranking 2     string   lengths 10 to 50.9k
  Ranking 3     string   lengths 10 to 6.78k
  Ranking 4     string   lengths 10 to 59.9k
  Ranking 5     string   lengths 10 to 6.78k
  Ranking 6     string   lengths 10 to 59.9k
  Ranking 7     string   lengths 10 to 59.9k
  Ranking 8     string   lengths 10 to 6.78k
  Ranking 9     string   lengths 10 to 59.9k
  Ranking 10    string   lengths 10 to 50.9k
  Ranking 11    string   lengths 13 to 6.78k
  Ranking 12    string   lengths 14 to 50.9k
  Ranking 13    string   lengths 24 to 2.74k
  score_0       float64  values 1 to 1.25
  score_1       float64  values 0 to 0.25
  score_2       float64  values 0 to 0.25
  score_3       float64  values 0 to 0.24
  score_4       float64  values 0 to 0.24
  score_5       float64  values 0 to 0.24
  score_6       float64  values 0 to 0.21
  score_7       float64  values 0 to 0.07
  score_8       float64  values 0 to 0.03
  score_9       float64  values 0 to 0.01
  score_10      float64  values 0 to 0
  score_11      float64  values 0 to 0
  score_12      float64  values 0 to 0
  score_13      float64  values 0 to 0

Each record below is shown flattened in column order: the Query Text abstract, the abstracts for Ranking 1 through Ranking 13, and then the fourteen score values (score_0 through score_13).
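A row in this layout is straightforward to handle programmatically. The following is a minimal sketch that assumes each record is available as a plain Python dict keyed by the column names above; the dict representation and the alignment of score_1 through score_13 with Ranking 1 through Ranking 13 are illustrative assumptions, not documented properties of the dataset.

```python
# Minimal sketch for working with one record of this dataset. Assumes the
# record is a dict keyed by the column names in the schema above; nothing
# here is an official loader for the dataset.

def parse_row(row: dict):
    query = row["Query Text"]
    candidates = [row[f"Ranking {i}"] for i in range(1, 14)]   # Ranking 1 .. Ranking 13
    scores = [float(row[f"score_{i}"]) for i in range(14)]     # score_0 .. score_13
    return query, candidates, scores

def sort_candidates_by_score(candidates, scores):
    # score_0 sits above 1.0 in every row shown here, so this sketch assumes
    # it belongs to the query slot and pairs score_1..score_13 with
    # Ranking 1..Ranking 13. That alignment is an assumption, not documented.
    paired = list(zip(candidates, scores[1:]))
    return sorted(paired, key=lambda pair: pair[1], reverse=True)
```

If that alignment holds, the sort is effectively a no-op for the rows shown here, since the score values already appear in non-increasing order.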
Row 1, Query Text: Quadrant of euphoria: a crowdsourcing platform for QoE assessment. Existing quality of experience assessment methods, subjective or objective, suffer from either or both problems of inaccurate experiment tools and expensive personnel cost. The panacea for them, as we have come to realize, lies in the joint application of paired comparison and crowdsourcing, the latter being a Web 2.0 practice of organizations asking ordinary unspecific Internet users to carry out internal tasks. We present in this article Quadrant of Euphoria, a user-friendly Web-based platform facilitating QoE assessments in network and multimedia studies, which features low cost, participant diversity, meaningful and interpretable QoE scores, subject consistency assurance, and a burdenless experiment process.
Queuing based optimal scheduling mechanism for QoE provisioning in cognitive radio relaying network In cognitive radio network (CRN), secondary users (SU) can share the licensed spectrum with the primary users (PU). Compared with the traditional network, spectrum utilization in CRN will be greatly improved. In order to ensure the performance of SUs as well as PU, wireless relaying can be employed to improve the system capacity. Meanwhile, quality-of-experience (QoE) should be considered and provisioned in the relay scheduling scheme to ensure user experience and comprehensive network performance. In this paper, we studied a QoE provisioning mechanism for a queuing based optimal relay scheduling problem in CRN. We designed a QoE provisioning scheme with multiple optimized goals about higher capacity and lower packet loss probability. The simulation results showed that our mechanism could get a much better performance on packet loss with suboptimum system capacity. And it indicated that our mechanism could guarantee a better user experience through the specific QoS-QoE mapping models. So our mechanism can improve the network performance and user experience comprehensively.
Mobile quality of experience: Recent advances and challenges Quality of Experience (QoE) is important from both a user perspective, since it assesses the quality a user actually experiences, and a network perspective, since it is important for a provider to dimension its network to support the necessary QoE. This paper presents some recent advances on the modeling and measurement of QoE with an emphasis on mobile networks. It also identifies key challenges for mobile QoE.
Personalized user engagement modeling for mobile videos. The ever-increasing mobile video services and users’ demand for better video quality have boosted research into the video Quality-of-Experience. Recently, the concept of Quality-of-Experience has evolved to Quality-of-Engagement, a more actionable metric to evaluate users’ engagement to the video services and directly relate to the service providers’ revenue model. Existing works on user engagement mostly adopt uniform models to quantify the engagement level of all users, overlooking the essential distinction of individual users. In this paper, we first conduct a large-scale measurement study on a real-world data set to demonstrate the dramatic discrepancy in user engagement, which implies that a uniform model is not expressive enough to characterize the distinctive engagement pattern of each user. To address this problem, we propose PE, a personalized user engagement model for mobile videos, which, for the first time, addresses the user diversity in the engagement modeling. Evaluation results on a real-world data set show that our system significantly outperforms the uniform engagement models, with a 19.14% performance gain.
QoE-based transport optimization for video delivery over next generation cellular networks Video streaming is considered as one of the most important and challenging applications for next generation cellular networks. Current infrastructures are not prepared to deal with the increasing amount of video traffic. The current Internet, and in particular the mobile Internet, was not designed with video requirements in mind and, as a consequence, its architecture is very inefficient for handling video traffic. Enhancements are needed to cater for improved Quality of Experience (QoE) and improved reliability in a mobile network. In this paper we design a novel dynamic transport architecture for next generation mobile networks adapted to video service requirements. Its main novelty is the transport optimization of video delivery that is achieved through a QoE oriented redesign of networking mechanisms as well as the integration of Content Delivery Networks (CDN) techniques.
Guest Editorial QoE-Aware Wireless Multimedia Systems. The 11 papers in this special issue cover a range of topics and can be logically organized in three groups, focusing on QoE-aware media protection, QoE assessment and modelling, and multi-user-QoE management.
The user in experimental computer systems research Experimental computer systems research typically ignores the end-user, modeling him, if at all, in overly simple ways. We argue that this (1) results in inadequate performance evaluation of the systems, and (2) ignores opportunities. We summarize our experiences with (a) directly evaluating user satisfaction and (b) incorporating user feedback in different areas of client/server computing, and use our experiences to motivate principles for that domain. Specifically, we report on user studies to measure user satisfaction with resource borrowing and with different clock frequencies in desktop computing, the development and evaluation of user interfaces to integrate user feedback into scheduling and clock frequency decisions in this context, and results in predicting user action and system response in a remote display system. We also present initial results on extending our work to user control of scheduling and mapping of virtual machines in a virtualization-based distributed computing environment. We then generalize (a) and (b) as recommendations for incorporating the user into experimental computer systems research.
Quality of experience management in mobile cellular networks: key issues and design challenges. Telecom operators have recently faced the need for a radical shift from technical quality requirements to customer experience guarantees. This trend has emerged due to the constantly increasing amount of mobile devices and applications and the explosion of overall traffic demand, forming a new era: “the rise of the consumer”. New terms have been coined in order to quantify, manage, and improve the...
Impact Of Mobile Devices And Usage Location On Perceived Multimedia Quality We explore the quality impact when audiovisual content is delivered to different mobile devices. Subjects were shown the same sequences on five different mobile devices and a broadcast quality television. Factors influencing quality ratings include video resolution, viewing distance, and monitor size. Analysis shows how subjects' perception of multimedia quality differs when content is viewed on different mobile devices. In addition, quality ratings from laboratory and simulated living room sessions were statistically equivalent.
MIMO technologies in 3GPP LTE and LTE-advanced 3rd Generation Partnership Project (3GPP) has recently completed the specification of the Long Term Evolution (LTE) standard. Majority of the world's operators and vendors are already committed to LTE deployments and developments, making LTE the market leader in the upcoming evolution to 4G wireless communication systems. Multiple input multiple output (MIMO) technologies introduced in LTE such as spatial multiplexing, transmit diversity, and beamforming are key components for providing higher peak rate at a better system efficiency, which are essential for supporting future broadband data service over wireless links. Further extension of LTE MIMO technologies is being studied under the 3GPP study item "LTE-Advanced" to meet the requirement of IMT-Advanced set by International Telecommunication Union Radiocommunication Sector (ITU-R). In this paper, we introduce various MIMO technologies employed in LTE and provide a brief overview on the MIMO technologies currently discussed in the LTE-Advanced forum.
The price of privacy and the limits of LP decoding. This work is at the intersection of two lines of research. One line, initiated by Dinur and Nissim, investigates the price, in accuracy, of protecting privacy in a statistical database. The second, growing from an extensive literature on compressed sensing (see in particular the work of Donoho and collaborators [4,7,13,11]) and explicitly connected to error-correcting codes by Candès and Tao ([4]; see also [5,3]), is in the use of linear programming for error correction. Our principal result is the discovery of a sharp threshold ρ* ≈ 0.239, so that if ρ < ρ* and A is a random m x n encoding matrix of independently chosen standard Gaussians, where m = O(n), then with overwhelming probability over the choice of A, for all x ∈ Rn, LP decoding corrects ⌊ρm⌋ arbitrary errors in the encoding Ax, while decoding can be made to fail if the error rate exceeds ρ*. Our bound resolves an open question of Candès, Rudelson, Tao, and Vershynin [3] and (oddly, but explicably) refutes empirical conclusions of Donoho [11] and Candès et al. [3]. By scaling and rounding we can easily transform these results to obtain polynomial-time decodable random linear codes with polynomial-sized alphabets tolerating any ρ < ρ* fraction of arbitrary errors. In the context of privacy-preserving data mining our results say that any privacy mechanism, interactive or non-interactive, providing reasonably accurate answers to a 0.761 fraction of randomly generated weighted subset sum queries, and arbitrary answers on the remaining 0.239 fraction, is blatantly non-private.
Parameterized interconnect order reduction with explicit-and-implicit multi-parameter moment matching for inter/intra-die variations In this paper we propose a novel parameterized interconnect order reduction algorithm, CORE, to efficiently capture both inter-die and intra-die variations. CORE applies a two-step explicit-and-implicit scheme for multiparameter moment matching. As such, CORE can match significantly more moments than other traditional techniques using the same model size. In addition, a recursive Arnoldi algorithm is proposed to quickly construct the Krylov subspace that is required for parameterized order reduction. Applying the recursive Arnoldi algorithm significantly reduces the computation cost for model generation. Several RC and RLC interconnect examples demonstrate that CORE can provide up to 10× better modeling accuracy than other traditional techniques, while achieving smaller model complexity (i.e. size). It follows that these interconnect models generated by CORE can provide more accurate simulation result with cheaper simulation cost, when they are utilized for gate-interconnect co-simulation.
Properties of Interval-Valued Fuzzy Relations, Atanassov's Operators and Decomposable Operations In this paper we study properties of interval-valued fuzzy relations which were introduced by L.A. Zadeh in 1975. Fuzzy set theory turned out to be a useful tool to describe situations in which the data are imprecise or vague. Interval-valued fuzzy set theory is a generalization of fuzzy set theory which was introduced also by Zadeh in 1965. We examine some properties of interval-valued fuzzy relations in the context of Atanassov's operators and decomposable operations in interval-valued fuzzy set theory.
Total variation minimization with separable sensing operator Compressed Imaging is the theory that studies the problem of image recovery from an under-determined system of linear measurements. One of the most popular methods in this field is Total Variation (TV) Minimization, known for accuracy and computational efficiency. This paper applies a recently developed Separable Sensing Operator approach to TV Minimization, using the Split Bregman framework as the optimization approach. The internal cycle of the algorithm is performed by efficiently solving coupled Sylvester equations rather than by an iterative optimization procedure as it is done conventionally. Such an approach requires less computer memory and computational time than any other algorithm published to date. Numerical simulations show the improved -- by an order of magnitude or more -- time vs. image quality compared to two conventional algorithms.
Row 1, scores (score_0 through score_13): 1.018056, 0.020435, 0.020435, 0.020435, 0.017354, 0.011802, 0.008505, 0.002684, 0.000186, 0.000007, 0, 0, 0, 0
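The Quadrant of Euphoria query in this row builds its QoE scores from crowdsourced paired comparisons. The abstract does not say which aggregation model is used, so the sketch below uses a standard Bradley-Terry fit purely as an illustration of how pairwise preference counts can be turned into per-stimulus scores; the vote matrix and iteration count are made up.

```python
# Minimal Bradley-Terry sketch for aggregating paired-comparison votes into
# scores. wins[i][j] = how many raters preferred stimulus i over stimulus j.
# The update below is the classic minorization-maximization iteration.

def bradley_terry(wins, n_iter=100):
    n = len(wins)
    p = [1.0] * n
    for _ in range(n_iter):
        new_p = []
        for i in range(n):
            total_wins_i = sum(wins[i])
            denom = sum(
                (wins[i][j] + wins[j][i]) / (p[i] + p[j])
                for j in range(n) if j != i
            )
            new_p.append(total_wins_i / denom if denom > 0 else p[i])
        s = sum(new_p)
        p = [x / s for x in new_p]   # normalize so the scores sum to 1
    return p

# Toy usage: three stimuli, stimulus 0 is preferred most often.
wins = [
    [0, 8, 9],
    [2, 0, 6],
    [1, 4, 0],
]
print(bradley_terry(wins))
```

Any other paired-comparison model (Thurstone, simple win rates) could be substituted; Bradley-Terry is shown only because it is the most common choice for this kind of aggregation.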
Row 2, Query Text: Hedges: A study in meaning criteria and the logic of fuzzy concepts
The Vienna Definition Language
General formulation of formal grammars. By extracting the basic properties common to the formal grammars that have appeared in the existing literature, we develop a general formulation of formal grammars. We define a pseudo grammar and derive from it the well-known probabilistic and fuzzy grammars, among others. Moreover, several interesting grammars such as ⊔∗ grammars, ⊔ ⊓ grammars, ⊔ ⊓ grammars, composite B-fuzzy grammars, and mixed fuzzy grammars, which have never appeared in any other papers before, are derived.
Matrix Equations and Normal Forms for Context-Free Grammars. The relationship between the set of productions of a context-free grammar and the corresponding set of defining equations is first pointed out. The closure operation on a matrix of strings is defined and this concept is used to formalize the solution to a set of linear equations. A procedure is then given for rewriting a context-free grammar in Greibach normal form, where the replacement string of each production begins with a terminal symbol. An additional procedure is given for rewriting the grammar so that each replacement string both begins and ends with a terminal symbol. Neither procedure requires the evaluation of regular expressions over the total vocabulary of the grammar, as is required by Greibach's procedure.
A Note on Fuzzy Sets
Fuzzy modifiers based on fuzzy relations In this paper we introduce a new type of fuzzy modifiers (i.e. mappings that transform a fuzzy set into a modified fuzzy set) based on fuzzy relations. We show how they can be applied for the representation of weakening adverbs (more or less, roughly) and intensifying adverbs (very, extremely) in the inclusive and the non-inclusive interpretation. We illustrate their use in an approximate reasoning scheme.
Linguistic description of the human gait quality The human gait is a complex phenomenon that is repeated in time following an approximated pattern. Using a three-axial accelerometer fixed in the waist, we can obtain a temporal series of measures that contains a numerical description of this phenomenon. Nevertheless, even when we represent graphically these data, it is difficult to interpret them due to the complexity of the phenomenon and the huge amount of available data. This paper describes our research on designing a computational system able to generate linguistic descriptions of this type of quasi-periodic complex phenomena. We used our previous work on both, Granular Linguistic Models of Phenomena and Fuzzy Finite State Machines, to create a basic linguistic model of the human gait. We have used this model to generate a human friendly linguistic description of this phenomenon focused on the assessment of the gait quality. We include a practical application where we analyze the gait quality of healthy individuals and people with lesions in their limbs.
COR: a methodology to improve ad hoc data-driven linguistic rule learning methods by inducing cooperation among rules This paper introduces a new learning methodology to quickly generate accurate and simple linguistic fuzzy models: the cooperative rules (COR) methodology. It acts on the consequents of the fuzzy rules to find those that are best cooperating. Instead of selecting the consequent with the highest performance in each fuzzy input subspace, as ad-hoc data-driven methods usually do, the COR methodology considers the possibility of using another consequent, different from the best one, when it allows the fuzzy model to be more accurate thanks to having a rule set with the best cooperation. Our proposal has shown good results in solving three different applications when compared to other methods.
Contrast of a fuzzy relation In this paper we address a key problem in many fields: how a structured data set can be analyzed in order to take into account the neighborhood of each individual datum. We propose representing the dataset as a fuzzy relation, associating a membership degree with each element of the relation. We then introduce the concept of interval-contrast, a means of aggregating information contained in the immediate neighborhood of each element of the fuzzy relation. The interval-contrast measures the range of membership degrees present in each neighborhood. We use interval-contrasts to define the necessary properties of a contrast measure, construct several different local contrast and total contrast measures that satisfy these properties, and compare our expressions to other definitions of contrast appearing in the literature. Our theoretical results can be applied to several different fields. In an Appendix A, we apply our contrast expressions to photographic images.
Extensions of the multicriteria analysis with pairwise comparison under a fuzzy environment Multicriteria decision-making (MCDM) problems often involve a complex decision process in which multiple requirements and fuzzy conditions have to be taken into consideration simultaneously. The existing approaches for solving this problem in a fuzzy environment are complex. Combining the concepts of grey relation and pairwise comparison, a new fuzzy MCDM method is proposed. First, the fuzzy analytic hierarchy process (AHP) is used to construct fuzzy weights of all criteria. Then, linguistic terms characterized by L–R triangular fuzzy numbers are used to denote the evaluation values of all alternatives versus subjective and objective criteria. Finally, the aggregation fuzzy assessments of different alternatives are ranked to determine the best selection. Furthermore, this paper uses a numerical example of location selection to demonstrate the applicability of the proposed method. The study results show that this method is an effective means for tackling MCDM problems in a fuzzy environment.
A fuzzy MCDM method for solving marine transshipment container port selection problems “Transshipment” is a very popular and important issue in the present international trade container transportation market. In order to reduce the international trade container transportation operation cost, it is very important for shipping companies to choose the best transshipment container port. The aim of this paper is to present a new Fuzzy Multiple Criteria Decision Making Method (FMCDM) for solving the transshipment container port selection problem under fuzzy environment. In this paper we present first the canonical representation of multiplication operation on three fuzzy numbers, and then this canonical representation is applied to the selection of transshipment container port. Based on the canonical representation, the decision maker of shipping company can determine quickly the ranking order of all candidate transshipment container ports and select easily the best one.
Sparse Reconstruction by Separable Approximation. Finding sparse approximate solutions to large underdetermined linear systems of equations is a common problem in signal/image processing and statistics. Basis pursuit, the least absolute shrinkage and selection operator (LASSO), wavelet-based deconvolution and reconstruction, and compressed sensing (CS) are a few well-known areas in which problems of this type appear. One standard approach is to minimize an objective function that includes a quadratic (ℓ2) error term added to a sparsity-inducing (usually ℓ1) regularizer. We present an algorithmic framework for the more general problem of minimizing the sum of a smooth convex function and a nonsmooth, possibly nonconvex regularizer. We propose iterative methods in which each step is obtained by solving an optimization subproblem involving a quadratic term with diagonal Hessian (i.e., separable in the unknowns) plus the original sparsity-inducing regularizer; our approach is suitable for cases in which this subproblem can be solved much more rapidly than the original problem. Under mild conditions (namely convexity of the regularizer), we prove convergence of the proposed iterative algorithm to a minimum of the objective function. In addition to solving the standard ℓ2-ℓ1 case, our framework yields efficient solution techniques for other regularizers, such as an ℓ∞ norm and group-separable regularizers. It also generalizes immediately to the case in which the data is complex rather than real. Experiments with CS problems show that our approach is competitive with the fastest known methods for the standard ℓ2-ℓ1 problem, as well as being efficient on problems with other separable regularization terms.
A fuzzy logic system for the detection and recognition of handwritten street numbers Fuzzy logic is applied to the problem of locating and reading street numbers in digital images of handwritten mail. A fuzzy rule-based system is defined that uses uncertain information provided by image processing and neural network-based character recognition modules to generate multiple hypotheses with associated confidence values for the location of the street number in an image of a handwritten address. The results of a blind test of the resultant system are presented to demonstrate the value of this new approach. The results are compared to those obtained using a neural network trained with backpropagation. The fuzzy logic system achieved higher performance rates
A possibilistic approach to the modeling and resolution of uncertain closed-loop logistics Closed-loop logistics planning is an important tactic for the achievement of sustainable development. However, the correlation among the demand, recovery, and landfilling makes the estimation of their rates uncertain and difficult. Although the fuzzy numbers can present such kinds of overlapping phenomena, the conventional method of defuzzification using level-cut methods could result in the loss of information. To retain complete information, the possibilistic approach is adopted to obtain the possibilistic mean and mean square imprecision index (MSII) of the shortage and surplus for uncertain factors. By applying the possibilistic approach, a multi-objective, closed-loop logistics model considering shortage and surplus is formulated. The two objectives are to reduce both the total cost and the root MSII. Then, a non-dominated solution can be obtained to support decisions with lower perturbation and cost. Also, the information on prediction interval can be obtained from the possibilistic mean and root MSII to support the decisions in the uncertain environment. This problem is non-deterministic polynomial-time hard, so a new algorithm based on the spanning tree-based genetic algorithm has been developed. Numerical experiments have shown that the proposed algorithm can yield comparatively efficient and accurate results.
Row 2, scores (score_0 through score_13): 1.016606, 0.025015, 0.025015, 0.025015, 0.010011, 0.002848, 0.000502, 0.000028, 0.000006, 0.000003, 0.000002, 0, 0, 0
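Ranking 12 of this row (Sparse Reconstruction by Separable Approximation) minimizes an ℓ2 data term plus an ℓ1 regularizer through subproblems with a diagonal quadratic term, which reduce to soft thresholding. The sketch below is a bare-bones iteration of that flavor; the fixed step size, the toy problem, and the use of NumPy are my assumptions, and the paper's adaptive step-size rules and stopping tests are not reproduced.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1: shrink every entry toward zero by t.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, y, lam, step=None, n_iter=500):
    # Minimize 0.5 * ||A x - y||_2^2 + lam * ||x||_1 by iterative shrinkage.
    # A fixed step of 1 / ||A||_2^2 keeps the separable quadratic surrogate
    # valid; SpaRSA itself re-chooses this term every iteration (not shown).
    if step is None:
        step = 1.0 / (np.linalg.norm(A, 2) ** 2)
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)
        x = soft_threshold(x - step * grad, step * lam)
    return x

# Toy usage: recover a sparse vector from a few random measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[[3, 17, 42]] = [1.5, -2.0, 0.7]
y = A @ x_true
x_hat = ista(A, y, lam=0.1)
print(np.nonzero(np.abs(x_hat) > 0.1)[0])
```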
Row 3, Query Text: Linguistic Decision-Making Models. Using linguistic values to assess results and information about external factors is quite usual in real decision situations. In this article we present a general model for such problems. Utilities are evaluated in a term set of labels and the information is supposed to be linguistic evidence, that is, it is to be represented by a basic assignment of probability (in the sense of Dempster-Shafer) but taking its values on a term set of linguistic likelihoods. Basic decision rules, based on fuzzy risk intervals, are developed and illustrated by several examples. The last section is devoted to analyzing the suitability of considering a hierarchical structure (represented by a tree) for the set of utility labels.
Multi-criteria analysis for a maintenance management problem in an engine factory: rational choice. The industrial organization needs to develop better methods for evaluating the performance of its projects. We are interested in the problems related to pieces with differing degrees of dirt. In this direction, we propose and evaluate a maintenance decision problem in an engine factory that is specialized in the production, sale and maintenance of medium and slow speed four stroke engines. The main purpose of this paper is to study the problem by means of the analytic hierarchy process to obtain the weights of criteria, and the TOPSIS method as multicriteria decision making to obtain the ranking of alternatives, when the information was given in linguistic terms.
Evaluating Government Websites Based On A Fuzzy Multiple Criteria Decision-Making Approach This paper presents a framework of website quality evaluation for measuring the performance of government websites. Multiple criteria decision-making (MCDM) is a widely used tool for evaluating and ranking problems containing multiple, usually conflicting criteria. In line with the multi-dimensional characteristics of website quality, MCDM provides an effective framework for an inter-websites comparison involving the evaluation of multiple attributes. It thus ranks different websites compared in terms of their overall performance. This paper models the inter-website comparison problem as an MCDM problem, and presents a practical and selective approach to deal with it. In addition, fuzzy logic is applied to the subjectivity and vagueness in the assessment process. The proposed framework is effectively illustrated to rate Turkish government websites.
Group decision making with linguistic preference relations with application to supplier selection. Linguistic preference relation is a useful tool for expressing preferences of decision makers in group decision making according to linguistic scales. But in the real decision problems, there usually exist interactive phenomena among the preference of decision makers, which makes it difficult to aggregate preference information by conventional additive aggregation operators. Thus, to approximate the human subjective preference evaluation process, it would be more suitable to apply non-additive measures tool without assuming additivity and independence. In this paper, based on the λ-fuzzy measure, we consider dependence among subjective preference of decision makers to develop some new linguistic aggregation operators such as linguistic ordered geometric averaging operator and extended linguistic Choquet integral operator to aggregate the multiplicative linguistic preference relations and additive linguistic preference relations, respectively. Further, the procedure and algorithm of group decision making based on these new linguistic aggregation operators and linguistic preference relations are given. Finally, a supplier selection example is provided to illustrate the developed approaches.
A hybrid multi-criteria decision-making model for firms competence evaluation. In this paper, we present a hybrid multi-criteria decision-making (MCDM) model to evaluate the competence of firms. Competence-based theory reveals that firm competencies are recognized from exclusive and unique capabilities that each firm enjoys in the marketplace and are tightly intertwined within different business functions throughout the company. Therefore, competence in the firm is a composite of various attributes. Among them many intangible and tangible attributes are difficult to measure. In order to overcome the issue, we invite fuzzy set theory into the measurement of performance. In this paper we first calculate the weight of each criterion through the adaptive analytic hierarchy process (AHP) approach (A^3), and then we appraise the performance of firms via linguistic variables which are expressed as trapezoidal fuzzy numbers. In the next step we transform these fuzzy numbers into interval data by means of the α-cut. Then, considering different values for α, we rank the firms through the TOPSIS method with interval data. Since there are different ranks for different α values, we apply the linear assignment method to obtain the final rank for the alternatives.
Fuzzy relational algebra for possibility-distribution-fuzzy-relational model of fuzzy data In the real world, there exist a lot of fuzzy data which cannot or need not be precisely defined. We distinguish two types of fuzziness: one in an attribute value itself and the other in an association of them. For such fuzzy data, we propose a possibility-distribution-fuzzy-relational model, in which fuzzy data are represented by fuzzy relations whose grades of membership and attribute values are possibility distributions. In this model, the former fuzziness is represented by a possibility distribution and the latter by a grade of membership. Relational algebra for the ordinary relational database as defined by Codd includes the traditional set operations and the special relational operations. These operations are classified into the primitive operations, namely, union, difference, extended Cartesian product, selection and projection, and the additional operations, namely, intersection, join, and division. We define the relational algebra for the possibility-distribution-fuzzy-relational model of fuzzy databases.
Approaches to manage hesitant fuzzy linguistic information based on the cosine distance and similarity measures for HFLTSs and their application in qualitative decision making. We introduce a family of novel distance and similarity measures for HFLTSs. We develop a cosine-distance-based HFL-TOPSIS method. We develop a cosine-distance-based HFL-VIKOR method. We use a numerical example to illustrate the proposed methods. Qualitative and hesitant information is common in practical decision making processes. In such complicated decision making problems, it is flexible for experts to use comparative linguistic expressions to express their opinions, since linguistic expressions are much closer than a single or simple linguistic term to the human way of thinking and cognition. The hesitant fuzzy linguistic term set (HFLTS) turns out to be a powerful tool in representing and eliciting the comparative linguistic expressions. In order to develop some approaches to decision making with hesitant fuzzy linguistic information, in this paper, we firstly introduce a family of novel distance and similarity measures for HFLTSs, such as the cosine distance and similarity measures, the weighted cosine distance and similarity measures, the order weighted cosine distance and similarity measures, and the continuous cosine distance and similarity measures. All these distance and similarity measures are proposed from the geometric point of view, while the existing distance and similarity measures over HFLTSs are based on different forms of algebraic distance measures. Afterwards, based on the hesitant fuzzy linguistic cosine distance measures between hesitant fuzzy linguistic elements, the cosine-distance-based HFL-TOPSIS method and the cosine-distance-based HFL-VIKOR method are developed to deal with hesitant fuzzy linguistic multiple criteria decision making problems. The step-by-step algorithms of these two methods are given for the convenience of applications. Finally, a numerical example concerning the selection of ERP systems is given to illustrate the validation and efficiency of the proposed methods.
Linguistic modeling by hierarchical systems of linguistic rules In this paper, we propose an approach to design linguistic models which are accurate to a high degree and may be suitably interpreted. This approach is based on the development of a hierarchical system of linguistic rules learning methodology. This methodology has been thought as a refinement of simple linguistic models which, preserving their descriptive power, introduces small changes to increase their accuracy. To do so, we extend the structure of the knowledge base of fuzzy rule base systems in a hierarchical way, in order to make it more flexible. This flexibilization will allow us to have linguistic rules defined over linguistic partitions with different granularity levels, and thus to improve the modeling of those problem subspaces where the former models have bad performance
A satisfactory-oriented approach to multiexpert decision-making with linguistic assessments. This paper proposes a multiexpert decision-making (MEDM) method with linguistic assessments, making use of the notion of random preferences and a so-called satisfactory principle. It is well known that decision-making problems that manage preferences from different experts follow a common resolution scheme composed of two phases: an aggregation phase that combines the individual preferences to obtain a collective preference value for each alternative; and an exploitation phase that orders the collective preferences according to a given criterion, to select the best alternative/s. For our method, instead of using an aggregation operator to obtain a collective preference value, a random preference is defined for each alternative in the aggregation phase. Then, based on a satisfactory principle defined in this paper, that says that it is perfectly satisfactory to select an alternative as the best if its performance is as at least "good" as all the others under the same evaluation scheme, we propose a linguistic choice function to establish a rank ordering among the alternatives. Moreover, we also discuss how this linguistic decision rule can be applied to the MEDM problem in multigranular linguistic contexts. Two application examples taken from the literature are used to illuminate the proposed techniques.
A causal and effect decision making model of service quality expectation using grey-fuzzy DEMATEL approach. This research uses a solution based on a combined grey-fuzzy DEMATEL method to deal with the objective of the study, which is to present a perception approach for ranking real estate agent service quality expectation under uncertainty. The ranking of the best top five real estate agents might be a key strategic direction for other real estate agents with respect to service quality expectation. The solving procedure is as follows: (i) the weights of criteria and alternatives are described by triangular fuzzy numbers; (ii) a grey possibility degree is used to obtain the ranking order for all alternatives; (iii) DEMATEL is used to resolve interdependency relationships among the criteria; and (iv) an empirical example of a real estate agent service quality ranking problem in customer expectation is solved with the proposed approach, indicating that real estate agent R1 (CY real estate agent) is the best selection in terms of service quality in customer expectation.
Automorphisms Of The Algebra Of Fuzzy Truth Values. This paper is an investigation of the automorphisms of the algebra of truth values of type-2 fuzzy sets. This algebra contains isomorphic copies of the truth value algebras of type-1 and of interval-valued fuzzy sets. It is shown that these subalgebras are characteristic; that is, they are carried onto themselves by automorphisms of the containing algebra of truth values of fuzzy sets. Some other relevant subalgebras are proved characteristic, including the subalgebra of convex normal functions. The principal tool in this study is the determination of various irreducible elements.
Compressive sensing for sparsely excited speech signals Compressive sensing (CS) has been proposed for signals with sparsity in a linear transform domain. We explore a signal dependent unknown linear transform, namely the impulse response matrix operating on a sparse excitation, as in the linear model of speech production, for recovering compressive sensed speech. Since the linear transform is signal dependent and unknown, unlike the standard CS formulation, a codebook of transfer functions is proposed in a matching pursuit (MP) framework for CS recovery. It is found that MP is efficient and effective to recover CS encoded speech as well as jointly estimate the linear model. Moderate number of CS measurements and low order sparsity estimate will result in MP converge to the same linear transform as direct VQ of the LP vector derived from the original signal. There is also high positive correlation between signal domain approximation and CS measurement domain approximation for a large variety of speech spectra.
Handling Fuzziness In Temporal Databases This paper proposes a new data model, called FuzzTime, which is capable of handling both aspects of fuzziness and time of data. These two features can always be encountered simultaneously in many applications. This work is aimed to be a conceptual framework for advanced applications of database systems. Our approach has extended the concept of the relational data model to have such a capability. The notion of linguistic variables, fuzzy set theory and possibility theory have been employed in handling the fuzziness aspect, and the discrete time model has been assumed. Some important time-related operators to be used in a temporal query evaluation with an existence of fuzziness are also discussed.
Generating realistic stimuli for accurate power grid analysis. Power analysis tools are an integral component of any current power sign-off methodology. The performance of a design's power grid affects the timing and functionality of a circuit, directly impacting the overall performance. Ensuring power grid robustness implies taking into account, among others, static and dynamic effects of voltage drop, ground bounce, and electromigration. This type of verification is usually done by simulation, targeting a worst-case scenario where devices, switching almost simultaneously, could impose stern current demands on the power grid. While determination of the exact worst-case switching conditions from the grid perspective is usually not practical, the choice of simulation stimuli has a critical effect on the results of the analysis. Targeting safe but unrealistic settings could lead to pessimistic results and costly overdesigns in terms of die area. In this article we describe a software tool that generates a reasonable, realistic, set of stimuli for simulation. The approach proposed accounts for timing and spatial restrictions that arise from the circuit's netlist and placement and generates an approximation to the worst-case condition. The resulting stimuli indicate that only a fraction of the gates change in any given timing window, leading to a more robust verification methodology, especially in the dynamic case. Generating such stimuli is akin to performing a standard static timing analysis, so the tool fits well within conventional design frameworks. Furthermore, the tool can be used for hotspot detection in early design stages.
Row 3, scores (score_0 through score_13): 1.007657, 0.006847, 0.006847, 0.002334, 0.001764, 0.001351, 0.000678, 0.00036, 0.000121, 0.000043, 0.000005, 0, 0, 0
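Several entries in this row revolve around linguistic term sets; the HFLTS entry in particular introduces cosine-style distance and similarity measures between hesitant fuzzy linguistic term sets. The exact measures from that paper are not reproduced here; the sketch below only illustrates the geometric idea under one simplifying assumption, namely that an HFLTS on a scale s_0..s_g is represented by its sorted term indices, padded to a common length with its maximum index.

```python
import math

def cosine_similarity(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu > 0 and nv > 0 else 0.0

def hflts_cosine(h1, h2, g):
    # Toy cosine similarity between two HFLTSs given as lists of term indices
    # on a scale s_0..s_g. Shorter sets are padded with their own maximum
    # index (an assumed, optimistic padding rule, not the paper's definition).
    n = max(len(h1), len(h2))
    a = sorted(h1) + [max(h1)] * (n - len(h1))
    b = sorted(h2) + [max(h2)] * (n - len(h2))
    # Normalize indices to [0, 1] so the scale size g does not dominate.
    a = [x / g for x in a]
    b = [x / g for x in b]
    return cosine_similarity(a, b)

# Toy usage on a 7-term scale (g = 6): "between s_3 and s_5" vs "at least s_4".
print(hflts_cosine([3, 4, 5], [4, 5, 6], g=6))
```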
Row 4, Query Text: View Scalable Multiview Video Coding Using 3-D Warping With Depth Map. Multiview video coding demands high compression rates as well as view scalability, which enables the video to be displayed on a multitude of different terminals. In order to achieve view scalability, it is necessary to limit the inter-view prediction structure. In this paper, we propose a new multiview video coding scheme that can improve the compression efficiency under such a limited inter-view prediction structure. All views are divided into two groups in the proposed scheme: base view and enhancement views. The proposed scheme first estimates a view-dependent geometry of the base view. It then uses a video encoder to encode the video of base view. The view-dependent geometry is also encoded by the video encoder. The scheme then generates prediction images of enhancement views from the decoded video and the view-dependent geometry by using image-based rendering techniques, and it makes residual signals for each enhancement view. Finally, it encodes residual signals by the conventional video encoder as if they were regular video signals. We implement one encoder that employs this scheme by using a depth map as the view-dependent geometry and 3-D warping as the view generation method. In order to increase the coding efficiency, we adopt the following three modifications: (1) object-based interpolation on 3-D warping; (2) depth estimation with consideration of rate-distortion costs; and (3) quarter-pel accuracy depth representation. Experiments show that the proposed scheme offers about 30% higher compression efficiency than the conventional scheme, even though one depth map video is added to the original multiview video.
Shape-adaptive wavelet encoding of depth maps. We present a novel depth-map codec aimed at free-viewpoint 3DTV. The proposed codec relies on a shape-adaptive wavelet transform and an explicit representation of the locations of major depth edges. Unlike classical wavelet transforms, the shape-adaptive transform generates small wavelet coefficients along depth edges, which greatly reduces the data entropy. The wavelet transform is implemented by shape-adaptive lifting, which enables fast computations and perfect reconstruction. We also develop a novel rate-constrained edge detection algorithm, which integrates the idea of significance bitplanes into the Canny edge detector. Along with a simple chain code, it provides an efficient way to extract and encode edges. Experimental results on synthetic and real data confirm the effectiveness of the proposed algorithm, with PSNR gains of 5 dB and more over the Middlebury dataset.
A new methodology to derive objective quality assessment metrics for scalable multiview 3D video coding With the growing demand for 3D video, efforts are underway to incorporate it in the next generation of broadcast and streaming applications and standards. 3D video is currently available in games, entertainment, education, security, and surveillance applications. A typical scenario for multiview 3D consists of several 3D video sequences captured simultaneously from the same scene with the help of multiple cameras from different positions and through different angles. Multiview video coding provides a compact representation of these multiple views by exploiting the large amount of inter-view statistical dependencies. One of the major challenges in this field is how to transmit the large amount of data of a multiview sequence over error prone channels to heterogeneous mobile devices with different bandwidth, resolution, and processing/battery power, while maintaining a high visual quality. Scalable Multiview 3D Video Coding (SMVC) is one of the methods to address this challenge; however, the evaluation of the overall visual quality of the resulting scaled-down video requires a new objective perceptual quality measure specifically designed for scalable multiview 3D video. Although several subjective and objective quality assessment methods have been proposed for multiview 3D sequences, no comparable attempt has been made for quality assessment of scalable multiview 3D video. In this article, we propose a new methodology to build suitable objective quality assessment metrics for different scalable modalities in multiview 3D video. Our proposed methodology considers the importance of each layer and its content as a quality of experience factor in the overall quality. Furthermore, in addition to the quality of each layer, the concept of disparity between layers (inter-layer disparity) and disparity between the units of each layer (intra-layer disparity) is considered as an effective feature to evaluate overall perceived quality more accurately. Simulation results indicate that by using this methodology, more efficient objective quality assessment metrics can be introduced for each multiview 3D video scalable modalities.
Depth Reconstruction Filter and Down/Up Sampling for Depth Coding in 3-D Video A depth image represents three-dimensional (3-D) scene information and is commonly used for depth image-based rendering (DIBR) to support 3-D video and free-viewpoint video applications. The virtual view is generally rendered by the DIBR technique and its quality depends highly on the quality of depth image. Thus, efficient depth coding is crucial to realize the 3-D video system. In this letter, w...
View synthesis prediction for multiview video coding We propose a rate-distortion-optimized framework that incorporates view synthesis for improved prediction in multiview video coding. In the proposed scheme, auxiliary information, including depth data, is encoded and used at the decoder to generate the view synthesis prediction data. The proposed method employs optimal mode decision including view synthesis prediction, and sub-pixel reference matching to improve prediction accuracy of the view synthesis prediction. Novel variants of the skip and direct modes are also presented, which infer the depth and correction vector information from neighboring blocks in a synthesized reference picture to reduce the bits needed for the view synthesis prediction mode. We demonstrate two multiview video coding scenarios in which view synthesis prediction is employed. In the first scenario, the goal is to improve the coding efficiency of multiview video where block-based depths and correction vectors are encoded by CABAC in a lossless manner on a macroblock basis. A variable block-size depth/motion search algorithm is described. Experimental results demonstrate that view synthesis prediction does provide some coding gains when combined with disparity-compensated prediction. In the second scenario, the goal is to use view synthesis prediction for reducing rate overhead incurred by transmitting depth maps for improved support of 3DTV and free-viewpoint video applications. It is assumed that the complete depth map for each view is encoded separately from the multiview video and used at the receiver to generate intermediate views. We utilize this information for view synthesis prediction to improve overall coding efficiency. Experimental results show that the rate overhead incurred by coding depth maps of varying quality could be offset by utilizing the proposed view synthesis prediction techniques to reduce the bitrate required for coding multiview video.
3-D Video Representation Using Depth Maps Current 3-D video (3DV) technology is based on stereo systems. These systems use stereo video coding for pictures delivered by two input cameras. Typically, such stereo systems only reproduce these two camera views at the receiver and stereoscopic displays for multiple viewers require wearing special 3-D glasses. On the other hand, emerging autostereoscopic multiview displays emit a large numbers of views to enable 3-D viewing for multiple users without requiring 3-D glasses. For representing a large number of views, a multiview extension of stereo video coding is used, typically requiring a bit rate that is proportional to the number of views. However, since the quality improvement of multiview displays will be governed by an increase of emitted views, a format is needed that allows the generation of arbitrary numbers of views with the transmission bit rate being constant. Such a format is the combination of video signals and associated depth maps. The depth maps provide disparities associated with every sample of the video signal that can be used to render arbitrary numbers of additional views via view synthesis. This paper describes efficient coding methods for video and depth data. For the generation of views, synthesis methods are presented, which mitigate errors from depth estimation and coding.
Subjective Study On Compressed Asymmetric Stereoscopic Video. Asymmetric stereoscopic video coding takes advantage of the binocular suppression of human vision by representing one of the views with a lower quality. This paper describes a subjective quality test with asymmetric stereoscopic video. Different options for achieving compressed mixed-quality and mixed-resolution asymmetric stereo video were studied and compared to symmetric stereo video. The bitstreams for different coding arrangements were simulcast-coded according to the Advanced Video Coding (H.264/AVC) standard. The results showed that in most cases, resolution-asymmetric stereo video with the downsampling ratio of 1/2 along both coordinate axes provided similar quality as symmetric and quality-asymmetric full-resolution stereo video. These results were achieved under the same bitrate constraint while the processing complexity decreased considerably. Moreover, in all test cases, the symmetric and mixed-quality full-resolution stereoscopic video bitstreams resulted in a similar quality at the same bitrates.
Transport and Storage Systems for 3-D Video Using MPEG-2 Systems, RTP, and ISO File Format Three-dimensional video based on stereo and multiview video representations is currently being introduced to the home through various channels, including broadcast such as via cable, terrestrial and satellite transmission, streaming and download through the Internet, as well as on storage media such as Blu-ray discs. In order to deliver 3-D content to the consumer, different media system technologies have been standardized or are currently under development. The most important standards are MPEG-2 systems, which is used for digital broadcast and storage on Blu-ray discs, real-time transport protocol (RTP), which is used for real-time transmissions over the Internet, and the ISO base media file format, which can be used for progressive download in video-on-demand applications. In this paper, we give an overview of these three system layer approaches, where the main focus is on the multiview video coding (MVC) extension of H.264/AVC and the application of the system approaches to the delivery and storage of MVC.
On the way towards fourth-generation mobile: 3GPP LTE and LTE-advanced. Long-Term Evolution (LTE) is the new standard recently specified by the 3GPP on the way towards fourth-generation mobile. This paper presents the main technical features of this standard as well as its performance in terms of peak bit rate and average cell throughput, among others. LTE entails a big technological improvement as compared with the previous 3G standard. However, this paper also demonstrates that LTE performance does not fulfil the technical requirements established by ITU-R to classify one radio access technology as a member of the IMT-Advanced family of standards. Thus, this paper describes the procedure followed by the 3GPP to address these challenging requirements. Through the design and optimization of new radio access techniques and a further evolution of the system, the 3GPP is laying down the foundations of the future LTE-Advanced standard, the 3GPP candidate for 4G. This paper offers a brief insight into these technological trends.
Look-ahead rate adaptation algorithm for DASH under varying network environments Dynamic Adaptive Streaming over HTTP (DASH) is slowly becoming the most popular online video streaming technology. DASH enables the video player to adapt the quality of the multimedia content being downloaded in order to match the varying network conditions. The key challenge with DASH is to decide the optimal video quality for the next video segment under the current network conditions. The aim is to download the next segment before the player experiences buffer-starvation. Several rate adaptation methodologies proposed so far rely on the TCP throughput measurements and the current buffer occupancy. However, these techniques, do not consider any information regarding the next segment that is to be downloaded. They assume that the segment sizes are uniform and assign equal weights to all the segments. However, due to the video encoding techniques employed, different segments of the video with equal playback duration are found to be of different sizes. In the current paper, we propose to list the individual segment characteristics in the Media Presentation Description (MPD) file during the preprocessing stage; this is later used in the segment download time estimations. We also propose a novel rate adaptation methodology that uses the individual segment sizes in addition to the measured TCP throughput and the buffer occupancy estimate for the best video rate to be used for the next segments.
A Definition of a Nonprobabilistic Entropy in the Setting of Fuzzy Sets Theory
Combination of interval-valued fuzzy set and soft set The soft set theory, proposed by Molodtsov, can be used as a general mathematical tool for dealing with uncertainty. By combining the interval-valued fuzzy set and soft set models, the purpose of this paper is to introduce the concept of the interval-valued fuzzy soft set. The complement, ''AND'' and ''OR'' operations are defined on the interval-valued fuzzy soft sets. The DeMorgan's, associative and distribution laws of the interval-valued fuzzy soft sets are then proved. Finally, a decision problem is analyzed by the interval-valued fuzzy soft set. Some numerical examples are employed to substantiate the conceptual arguments.
On the sparseness of 1-norm support vector machines. There is some empirical evidence available showing that 1-norm Support Vector Machines (1-norm SVMs) have good sparseness; however, both how good sparseness 1-norm SVMs can reach and whether they have a sparser representation than that of standard SVMs are not clear. In this paper we take into account the sparseness of 1-norm SVMs. Two upper bounds on the number of nonzero coefficients in the decision function of 1-norm SVMs are presented. First, the number of nonzero coefficients in 1-norm SVMs is at most equal to the number of only the exact support vectors lying on the +1 and -1 discriminating surfaces, while that in standard SVMs is equal to the number of support vectors, which implies that 1-norm SVMs have better sparseness than that of standard SVMs. Second, the number of nonzero coefficients is at most equal to the rank of the sample matrix. A brief review of the geometry of linear programming and the primal steepest edge pricing simplex method are given, which allows us to provide the proof of the two upper bounds and evaluate their tightness by experiments. Experimental results on toy data sets and the UCI data sets illustrate our analysis.
Path criticality computation in parameterized statistical timing analysis This paper presents a method to compute criticality probabilities of paths in parameterized statistical static timing analysis (SSTA). We partition the set of all the paths into several groups and formulate the path criticality into a joint probability of inequalities. Before evaluating the joint probability directly, we simplify the inequalities through algebraic elimination, handling topological correlation. Our proposed method uses conditional probabilities to obtain the joint probability, and statistics of random variables representing process parameters are changed due to given conditions. To calculate the conditional statistics of the random variables, we derive analytic formulas by extending Clark's work. This allows us to obtain the conditional probability density function of a path delay, given the path is critical, as well as to compute criticality probabilities of paths. Our experimental results show that the proposed method provides 4.2X better accuracy on average in comparison to the state-of-art method.
Row 4, scores (score_0 through score_13): 1.030193, 0.028999, 0.028571, 0.021751, 0.009696, 0.005693, 0.000597, 0.000061, 0.000006, 0, 0, 0, 0, 0
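The DASH entry in this row argues for using per-segment sizes from the MPD, rather than a uniform-size assumption, when picking the next representation. The sketch below shows that idea in its simplest form: pick the highest bitrate whose actual next-segment size can be downloaded from the measured throughput without draining the buffer below a safety margin. The function names, the margin, and the throughput estimate are illustrative assumptions, not the paper's algorithm.

```python
def choose_representation(segment_sizes_bytes, throughput_bps, buffer_s, margin_s=4.0):
    # Pick the index of the highest-quality representation whose *actual*
    # next-segment size downloads before the buffer falls below margin_s.
    #
    # segment_sizes_bytes: per-representation size of the NEXT segment,
    #                      ordered from lowest to highest quality (assumed).
    # throughput_bps:      recent throughput estimate in bits per second.
    # buffer_s:            current buffer occupancy in seconds of playback.
    best = 0  # always fall back to the lowest quality
    for idx, size in enumerate(segment_sizes_bytes):
        download_time_s = (size * 8) / throughput_bps
        if buffer_s - download_time_s >= margin_s:
            best = idx
    return best

# Toy usage: the next segment is unusually large at the top quality, so a
# uniform-size estimate would overshoot while the size-aware choice backs off.
sizes = [250_000, 600_000, 1_400_000, 3_900_000]   # bytes for one 4 s segment
print(choose_representation(sizes, throughput_bps=4_000_000, buffer_s=10.0))
```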
Row 5, Query Text: Differential RAID: rethinking RAID for SSD reliability. Deployment of SSDs in enterprise settings is limited by the low erase cycles available on commodity devices. Redundancy solutions such as RAID can potentially be used to protect against the high Bit Error Rate (BER) of aging SSDs. Unfortunately, such solutions wear out redundant devices at similar rates, inducing correlated failures as arrays age in unison. We present Diff-RAID, a new RAID variant that distributes parity unevenly across SSDs to create age disparities within arrays. By doing so, Diff-RAID balances the high BER of old SSDs against the low BER of young SSDs. Diff-RAID provides much greater reliability for SSDs compared to RAID-4 and RAID-5 for the same space overhead, and offers a trade-off curve between throughput and reliability.
On efficient wear leveling for large-scale flash-memory storage systems Flash memory has won its edge over many other storage media for embedded systems because it better tolerates the extreme environments to which embedded systems are exposed. In this paper, wear-leveling techniques for lengthening the overall lifespan of flash memory are considered. This paper presents the dual-pool algorithm, which realizes two key ideas: to cease the wearing of blocks by storing cold data, and to smartly leave blocks alone until wear leveling takes effect. The proposed algorithm requires no complicated tuning, and it resists changes of spatial locality in workloads. Extensive evaluation and comparison were conducted, and the merits of the proposed algorithm are justified in terms of wear-leveling performance and resource conservation.
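A highly simplified sketch of the dual-pool idea, assuming a toy workload in which hot data always lands on hot-pool blocks: once a hot block's erase count runs ahead of the coldest block by more than a threshold, the two swap roles so the worn block is parked with cold data. Pool sizes, the threshold, and the workload are illustrative assumptions, not the paper's parameters.

```python
# Toy simulation of dual-pool-style wear leveling; parameters are illustrative.
import random

def dual_pool_sim(num_blocks=64, writes=20000, threshold=16, seed=1):
    erase_count = [0] * num_blocks
    hot = set(range(num_blocks // 2))            # blocks currently absorbing hot data
    cold = set(range(num_blocks // 2, num_blocks))
    rng = random.Random(seed)
    for _ in range(writes):
        blk = rng.choice(tuple(hot))             # hot data keeps hitting hot-pool blocks
        erase_count[blk] += 1
        oldest = max(hot, key=lambda b: erase_count[b])
        youngest = min(cold, key=lambda b: erase_count[b])
        if erase_count[oldest] - erase_count[youngest] > threshold:
            # park cold data on the worn block, release the young block for hot data
            hot.remove(oldest); cold.add(oldest)
            cold.remove(youngest); hot.add(youngest)
    return max(erase_count) - min(erase_count)

print("erase-count spread with dual-pool swapping:", dual_pool_sim())
```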
A set-based mapping strategy for flash-memory reliability enhancement With the wide applicability of flash memory in various application domains, reliability has become a very critical issue. This research is motivated by the need to resolve the lifetime problem of flash memory and a strong demand for turning thrown-away flash-memory chips into downgraded products. We propose a set-based mapping strategy with an effective implementation and low resource requirements, e.g., SRAM. A configurable management design and the wear-leveling issue are considered. The behavior of the proposed method is also analyzed with respect to popular implementations in the industry. We show that the endurance of flash memory can be significantly improved by a series of experiments over a realistic trace. Our experiments show that the read performance is also largely improved.
A commitment-based management strategy for the performance and reliability enhancement of flash-memory storage systems Cost has been a major driving force in the development of the flash memory technology, but has also introduced serious challenges to reliability and performance for future products. In this work, we propose a commitment-based management strategy to resolve the reliability problem of many flash-memory products. A three-level address translation architecture with an adaptive block mapping mechanism is proposed to accelerate the address translation process with a limited amount of RAM. Parallelism of operations over multiple chips is also explored with consideration of the write constraints of multi-level-cell flash memory chips.
A version-based strategy for reliability enhancement of flash file systems In recent years, reliability has become a critical issue in the design of flash file systems due to the growing unreliability of advanced flash-memory chips. In this paper, a version-based strategy with optimal space utilization is proposed to maintain the consistency among page versions of a file for potential recovery needs, with consideration of the write constraints of multi-level-cell flash memory. A series of experiments was conducted to show that the proposed strategy could improve the reliability of flash file systems with limited management and space overheads.
Statistical timing based on incomplete probabilistic descriptions of parameter uncertainty Existing approaches to timing analysis under uncertainty are based on restrictive assumptions. Statistical STA techniques assume that the full probabilistic distribution of parameter uncertainty is available; in reality, the complete probabilistic description often cannot be obtained. In this paper, a new paradigm for parameter uncertainty description is proposed as a way to consistently and rigorously handle partially available descriptions of parameter uncertainty. The paradigm is based on a theory of interval probabilistic models that permit handling uncertainty that is described in a distribution-free mode - just via the range, the mean, and the variance. This permits effectively handling multiple real-life challenges, including imprecise and limited information about the distributions of process parameters, parameters coming from different populations, and the sources of uncertainty that are too difficult to handle via full probabilistic measures (e.g. on-chip supply voltage variation). Specifically, analytical techniques for bounding the distributions of probabilistic interval variables are proposed. Besides, a provably correct strategy for fast Monte Carlo simulation based on probabilistic interval variables is introduced. A path-based timing algorithm implementing the novel modeling paradigm, as well as handling the traditional variability descriptions, has been developed. The results indicate the proposed algorithm can improve the upper bound of the 90th-percentile circuit delay, on average, by 5.3% across the ISCAS'85 benchmark circuits, compared to the worst-case timing estimates that use only the interval information of the partially specified parameters.
Some Defects in Finite-Difference Edge Finders This work illustrates and explains various artifacts in the output of five finite difference edge finders, those of J.F. Canny (1983, 1986), R.A. Boie et al. (1986) and R.A. Boie and I.J. Cox (1987), and three variations on that of D. Marr and E.C. Hildreth (1980), reimplemented with a common output format and method of noise suppression. These artifacts include gaps in boundaries, spurious boundaries, and deformation of region shape.
A Tutorial on Support Vector Machines for Pattern Recognition The tutorial starts with an overview of the concepts of VC dimension and structural risk minimization. We then describe linear Support Vector Machines (SVMs) for separable and non-separable data, working through a non-trivial example in detail. We describe a mechanical analogy, and discuss when SVM solutions are unique and when they are global. We describe how support vector training can be practically implemented, and discuss in detail the kernel mapping technique which is used to construct SVM solutions which are nonlinear in the data. We show how Support Vector machines can have very large (even infinite) VC dimension by computing the VC dimension for homogeneous polynomial and Gaussian radial basis function kernels. While very high VC dimension would normally bode ill for generalization performance, and while at present there exists no theory which shows that good generalization performance is guaranteed for SVMs, there are several arguments which support the observed high accuracy of SVMs, which we review. Results of some experiments which were inspired by these arguments are also presented. We give numerous examples and proofs of most of the key theorems. There is new material, and I hope that the reader will find that even old material is cast in a fresh light.
Reconstruction of a low-rank matrix in the presence of Gaussian noise. This paper addresses the problem of reconstructing a low-rank signal matrix observed with additive Gaussian noise. We first establish that, under mild assumptions, one can restrict attention to orthogonally equivariant reconstruction methods, which act only on the singular values of the observed matrix and do not affect its singular vectors. Using recent results in random matrix theory, we then propose a new reconstruction method that aims to reverse the effect of the noise on the singular value decomposition of the signal matrix. In conjunction with the proposed reconstruction method we also introduce a Kolmogorov–Smirnov based estimator of the noise variance.
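To make the idea of an orthogonally equivariant reconstruction concrete, here is a generic sketch that acts only on the singular values of the observed matrix, hard-thresholding them at the operator-norm scale of the noise, sigma*(sqrt(m)+sqrt(n)). The paper's estimator is more refined than this; the threshold rule, matrix sizes, and rank are illustrative assumptions.

```python
# Generic singular-value thresholding sketch for low-rank + Gaussian noise.
# Illustration of the orthogonally equivariant idea only, not the paper's estimator.
import numpy as np

def shrink_singular_values(Y, sigma):
    m, n = Y.shape
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    cutoff = sigma * (np.sqrt(m) + np.sqrt(n))    # rough operator-norm scale of noise
    s_hat = np.where(s > cutoff, s, 0.0)          # keep only signal-level singular values
    return U @ np.diag(s_hat) @ Vt

rng = np.random.default_rng(0)
m, n, rank, sigma = 100, 80, 3, 1.0
signal = rng.normal(size=(m, rank)) @ rng.normal(size=(rank, n)) * 3.0
Y = signal + sigma * rng.normal(size=(m, n))
X_hat = shrink_singular_values(Y, sigma)
print("relative error:", np.linalg.norm(X_hat - signal) / np.linalg.norm(signal))
```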
Proceedings of the 41st Design Automation Conference, DAC 2004, San Diego, CA, USA, June 7-11, 2004
Using polynomial chaos to compute the influence of multiple random surfers in the PageRank model The PageRank equation computes the importance of pages in a web graph relative to a single random surfer with a constant teleportation coefficient. To be globally relevant, the teleportation coefficient should account for the influence of all users. Therefore, we correct the PageRank formulation by modeling the teleportation coefficient as a random variable distributed according to user behavior. With this correction, the PageRank values themselves become random. We present two methods to quantify the uncertainty in the random PageRank: a Monte Carlo sampling algorithm and an algorithm based on the truncated polynomial chaos expansion of the random quantities. With each of these methods, we compute the expectation and standard deviation of the PageRanks. Our statistical analysis shows that the standard deviations of the PageRanks are uncorrelated with the PageRank vector.
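A minimal Monte Carlo sketch of the correction described above: treat the teleportation coefficient as a random variable, solve PageRank for each sample, and report the mean and standard deviation of the resulting PageRank vector. The Beta distribution, the tiny three-node graph, and the power-iteration solver are illustrative assumptions.

```python
# PageRank with a random teleportation coefficient, estimated by Monte Carlo.
import numpy as np

def pagerank(P, alpha, tol=1e-10, max_iter=1000):
    """Power iteration for x = alpha * P^T x + (1 - alpha)/n, P row-stochastic."""
    n = P.shape[0]
    x = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        x_new = alpha * P.T @ x + (1.0 - alpha) / n
        if np.abs(x_new - x).sum() < tol:
            return x_new
        x = x_new
    return x

P = np.array([[0.0, 0.5, 0.5],       # P[i, j] = probability of moving i -> j
              [1.0, 0.0, 0.0],
              [0.5, 0.5, 0.0]])
rng = np.random.default_rng(0)
samples = np.array([pagerank(P, a) for a in rng.beta(8.0, 2.0, size=2000)])
print("mean PageRank:", samples.mean(axis=0))
print("std  PageRank:", samples.std(axis=0))
```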
An overview of quality of experience measurement challenges for video applications in IP networks The increase in multimedia content on the Internet has created a renewed interest in quality assessment. There is, however, a main difference from traditional quality assessment approaches: the focus is now on user-perceived quality, as opposed to the classically proposed network-centered approach. In this paper we overview the most relevant challenges to perform Quality of Experience (QoE) assessment in IP networks and highlight the particular considerations necessary when compared to alternative mechanisms, already deployed, such as Quality of Service (QoS). To assist in handling such challenges we first discuss the different approaches to Quality of Experience assessment along with the most relevant QoE metrics, and then we discuss how they are used to provide objective results about user satisfaction.
Fuzzy Concepts and Formal Methods: A Fuzzy Logic Toolkit for Z It has been recognised that formal methods are useful as a modelling tool in requirements engineering. Specification languages such as Z permit the precise and unambiguous modelling of system properties and behaviour. However, some system problems, particularly those drawn from the IS problem domain, may be difficult to model in crisp or precise terms. It may also be desirable that formal modelling should commence as early as possible, even when our understanding of parts of the problem domain is only approximate. This paper suggests fuzzy set theory as a possible representation scheme for this imprecision or approximation. We provide a summary of a toolkit that defines the operators, measures and modifiers necessary for the manipulation of fuzzy sets and relations. We also provide some examples of the laws which establish an isomorphism between the extended notation presented here and conventional Z when applied to boolean sets and relations.
Designing type-2 fuzzy logic system controllers via fuzzy Lyapunov synthesis for the output regulator of a servomechanism with nonlinear backlash Fuzzy Lyapunov Synthesis is extended to the design of Type-2 Fuzzy Logic System Controllers for the output regulation problem for a servomechanism with nonlinear backlash. The problem in question is to design a feedback controller so as to obtain the closed-loop system in which all trajectories are bounded and the load of the driver is regulated to a desired position while also attenuating the influence of external disturbances. The servomotor position is the only measurement available for feedback; the proposed extension is far from trivial because of nonminimum phase properties of the system. Performance issues of the Type-2 Fuzzy Logic Regulator constructed are illustrated in a simulation study.
1.2105
0.2105
0.2105
0.10525
0.0025
0
0
0
0
0
0
0
0
0
Automorphisms Of The Algebra Of Fuzzy Truth Values This paper is an investigation of the automorphisms of the algebra of truth values of type-2 fuzzy sets. This algebra contains isomorphic copies of the truth value algebras of type-1 and of interval-valued fuzzy sets. It is shown that these subalgebras are characteristic; that is, are carried onto themselves by automorphisms of the containing algebra of truth values of fuzzy sets. Some other relevant subalgebras are proved characteristic, including the subalgebra of convex normal functions. The principal tool in this study is the determination of various irreducible elements.
Development of a type-2 fuzzy proportional controller Studies have shown that PID controllers can be realized by type-1 (conventional) fuzzy logic systems (FLSs). However, the input-output mappings of such fuzzy PID controllers are fixed. The control performance would, therefore, vary if the system parameters are uncertain. This paper aims at developing a type-2 FLS to control a process whose parameters are uncertain. A method for designing type-2 triangular membership functions with the desired generalized centroid is first proposed. By using this type-2 fuzzy set to partition the output domain, a type-2 fuzzy proportional controller is obtained. It is shown that the type-2 fuzzy logic system is equivalent to a proportional controller that may assume a range of gains. Simulation results are presented to demonstrate that the performance of the proposed controller can be maintained even when the system parameters deviate from their nominal values.
Some general comments on fuzzy sets of type-2 This paper contains some general comments on the algebra of truth values of fuzzy sets of type 2. It details the precise mathematical relationship with the algebras of truth values of ordinary fuzzy sets and of interval-valued fuzzy sets. Subalgebras of the algebra of truth values and t-norms on them are discussed. There is some discussion of finite type-2 fuzzy sets.
Sensed Signal Strength Forecasting for Wireless Sensors Using Interval Type-2 Fuzzy Logic System. In this paper, we present a new approach for sensed signal strength forecasting in wireless sensors using interval type-2 fuzzy logic system (FLS). We show that a type-2 fuzzy membership function, i.e., a Gaussian MF with uncertain mean is most appropriate to model the sensed signal strength of wireless sensors. We demonstrate that the sensed signals of wireless sensors are self-similar, which means it can be forecasted. An interval type-2 FLS is designed for sensed signal forecasting and is compared against a type-1 FLS. Simulation results show that the interval type-2 FLS performs much better than the type-1 FLS in sensed signal forecasting. This application can be further used for power on/off control in wireless sensors to save battery energy.
T-Norms for Type-2 Fuzzy Sets This paper is concerned with the definition of t-norms on the algebra of truth values of type-2 fuzzy sets. Our proposed definition extends the definition of ordinary t-norms on the unit interval and extends our definition of t-norms on the algebra of truth values for interval-valued fuzzy sets.
Pattern recognition using type-II fuzzy sets Type II fuzzy sets are a generalization of the ordinary fuzzy sets in which the membership value for each member of the set is itself a fuzzy set in [0, 1]. We introduce a similarity measure for measuring the similarity, or compatibility, between two type-II fuzzy sets. With this new similarity measure we show that type-II fuzzy sets provide us with a natural language for formulating classification problems in pattern recognition.
Xor-Implications and E-Implications: Classes of Fuzzy Implications Based on Fuzzy Xor The main contribution of this paper is to introduce an autonomous definition of the connective "fuzzy exclusive or" (fuzzy Xor, for short), which is independent of other connectives. Also, two canonical definitions of the connective Xor are obtained from the composition of fuzzy connectives, and based on the commutative and associative properties related to the notions of triangular norms, triangular conorms and fuzzy negations. We show that the main properties of the classical connective Xor are preserved by the connective fuzzy Xor, and, therefore, this new definition of the connective fuzzy Xor extends the related classical approach. The definitions of fuzzy Xor-implications and fuzzy E-implications, induced by the fuzzy Xor connective, are also studied, and their main properties are analyzed. The relationships between the fuzzy Xor-implications and the fuzzy E-implications with automorphisms are explored.
Multivariate modeling and type-2 fuzzy sets This paper explores the link between type-2 fuzzy sets and multivariate modeling. Elements of a space X are treated as observations fuzzily associated with values in a multivariate feature space. A category or class is likewise treated as a fuzzy allocation of feature values (possibly dependent on values in X). We observe that a type-2 fuzzy set on X generated by these two fuzzy allocations captures imprecision in the class definition and imprecision in the observations. In practice many type-2 fuzzy sets are in fact generated in this way and can therefore be interpreted as the output of a classification task. We then show that an arbitrary type-2 fuzzy set can be so constructed, by taking as a feature space a set of membership functions on X. This construction presents a new perspective on the Representation Theorem of Mendel and John. The multivariate modeling underpinning the type-2 fuzzy sets can also constrain realizable forms of membership functions. Because averaging operators such as centroid and subsethood on type-2 fuzzy sets involve a search for optima over membership functions, constraining this search can make computation easier and tighten the results. We demonstrate how the construction can be used to combine representations of concepts and how it therefore provides an additional tool, alongside standard operations such as intersection and subsethood, for concept fusion and computing with words.
A comparative study of ranking methods, similarity measures and uncertainty measures for interval type-2 fuzzy sets Ranking methods, similarity measures and uncertainty measures are very important concepts for interval type-2 fuzzy sets (IT2 FSs). So far, there is only one ranking method for such sets, whereas there are many similarity and uncertainty measures. A new ranking method and a new similarity measure for IT2 FSs are proposed in this paper. All these ranking methods, similarity measures and uncertainty measures are compared based on real survey data and then the most suitable ranking method, similarity measure and uncertainty measure that can be used in the computing with words paradigm are suggested. The results are useful in understanding the uncertainties associated with linguistic terms and hence how to use them effectively in survey design and linguistic information processing.
Similarity Measures Between Type-2 Fuzzy Sets In this paper, we give similarity measures between type-2 fuzzy sets and provide the axiom definition and properties of these measures. For practical use, we show how to compute the similarities between Gaussian type-2 fuzzy sets. Yang and Shih's [22] algorithm, a clustering method based on fuzzy relations by beginning with a similarity matrix, is applied to these Gaussian type-2 fuzzy sets by beginning with these similarities. The clustering results are reasonable consisting of a hierarchical tree according to different levels.
Structure segmentation and recognition in images guided by structural constraint propagation In some application domains, such as medical imaging, the objects that compose the scene are known as well as some of their properties and their spatial arrangement. We can take advantage of this knowledge to perform the segmentation and recognition of structures in medical images. We propose here to formalize this problem as a constraint network and we perform the segmentation and recognition by iterative domain reductions, the domains being sets of regions. For computational purposes we represent the domains by their upper and lower bounds and we iteratively reduce the domains by updating their bounds. We show some preliminary results on normal and pathological brain images.
Sublinear time, measurement-optimal, sparse recovery for all An approximate sparse recovery system in l1 norm makes a small number of measurements of a noisy vector with at most k large entries and recovers those heavy hitters approximately. Formally, it consists of parameters N, k, ε, an m-by-N measurement matrix, φ, and a decoding algorithm, D. Given a vector, x, where x_k denotes the optimal k-term approximation to x, the system approximates x by [EQUATION], which must satisfy [EQUATION] Among the goals in designing such systems are minimizing the number m of measurements and the runtime of the decoding algorithm, D. We consider the "forall" model, in which a single matrix φ, possibly "constructed" non-explicitly using the probabilistic method, is used for all signals x. Many previous papers have provided algorithms for this problem. But all such algorithms that use the optimal number m = O(k log(N/k)) of measurements require superlinear time Ω(N log(N/k)). In this paper, we give the first algorithm for this problem that uses the optimum number of measurements (up to constant factors) and runs in sublinear time o(N) when k is sufficiently less than N. Specifically, for any positive integer l, our approach uses time O(l^5 ε^{-3} k (N/k)^{1/l}) and uses m = O(l^8 ε^{-3} k log(N/k)) measurements, with access to a data structure requiring space and preprocessing time O(l N k^{0.2} / ε).
Parallel Opportunistic Routing in Wireless Networks We study benefits of opportunistic routing in a large wireless ad hoc network by examining how the power, delay, and total throughput scale as the number of source–destination pairs increases up to the operating maximum. Our opportunistic routing is novel in a sense that it is massively parallel, i.e., it is performed by many nodes simultaneously to maximize the opportunistic gain while controlling the interuser interference. The scaling behavior of conventional multihop transmission that does not employ opportunistic routing is also examined for comparison. Our main results indicate that our opportunistic routing can exhibit a net improvement in overall power–delay tradeoff over the conventional routing by providing up to a logarithmic boost in the scaling law. Such a gain is possible since the receivers can tolerate more interference due to the increased received signal power provided by the multi user diversity gain, which means that having more simultaneous transmissions is possible.
Evaluating process performance based on the incapability index for measurements with uncertainty Process capability indices are widely used in industry to measure the ability of firms or their suppliers to meet quality specifications. The index C_PP, which is easy to use and analytically tractable, has been successfully developed and applied by competitive firms to dominate highly-profitable markets by improving quality and productivity. Hypothesis testing is essential for practical decision-making. Generally, the underlying data are assumed to be precise numbers, but in general it is much more realistic to consider fuzzy values, which are imprecise numbers. In this case, the test statistic also yields an imprecise number, and decision rules based on the crisp-based approach are inappropriate. This study investigates the situation of uncertain or imprecise product quality measurements. A set of confidence intervals for sample mean and variance is used to produce triangular fuzzy numbers for estimating the C_PP index. Based on the δ-cuts of the fuzzy estimators, a decision testing rule and procedure are developed to evaluate process performance based on critical values and fuzzy p-values. An efficient computer program is also designed for calculating fuzzy p-values. Finally, an example is examined for demonstrating the application of the proposed approach.
1.022465
0.024448
0.019227
0.014641
0.008308
0.001918
0.000234
0.000087
0.000042
0.000014
0.000001
0
0
0
Fuzzy decision making with immediate probabilities We developed a new decision-making model with probabilistic information and used the concept of the immediate probability to aggregate the information. This type of probability modifies the objective probability by introducing the attitudinal character of the decision maker. In doing so, we use the ordered weighted averaging (OWA) operator. When using this model, it is assumed that the information is given by exact numbers. However, this may not be the real situation found within the decision-making problem. Sometimes, the information is vague or imprecise and it is necessary to use another approach to assess the information, such as the use of fuzzy numbers. Then, the decision-making problem can be represented more completely because we now consider the best and worst possible scenarios, along with the possibility that some intermediate event (an internal value) will occur. We will use the fuzzy ordered weighted averaging (FOWA) operator to aggregate the information with the probabilities. As a result, we obtain the Immediate Probability-FOWA (IP-FOWA) operator. We will study some of its main properties. We will apply the new approach to a decision-making problem concerning the selection of strategies.
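The core of the immediate-probability idea can be sketched in a few lines: OWA weights attached to the ordered arguments reweight the objective probabilities before aggregation. For brevity the payoffs below are crisp numbers; in the IP-FOWA case the same arithmetic would be applied to the components of triangular fuzzy numbers. The weights and probabilities are made-up illustrations.

```python
# Aggregation with immediate probabilities (crisp payoffs for simplicity).
def ip_owa(payoffs, probs, owa_weights):
    # order payoffs from largest to smallest, carrying their probabilities along
    order = sorted(range(len(payoffs)), key=lambda i: payoffs[i], reverse=True)
    b = [payoffs[i] for i in order]
    p = [probs[i] for i in order]
    raw = [w * pi for w, pi in zip(owa_weights, p)]     # immediate probabilities ...
    v = [r / sum(raw) for r in raw]                     # ... normalized to sum to 1
    return sum(vi * bi for vi, bi in zip(v, b))

payoffs = [60, 30, 50, 20]          # outcomes of one strategy under 4 states (illustrative)
probs = [0.3, 0.3, 0.2, 0.2]        # objective state probabilities
optimistic = [0.4, 0.3, 0.2, 0.1]   # OWA weights favouring the best outcomes
print("IP-OWA value:", round(ip_owa(payoffs, probs, optimistic), 3))
```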
Comparing approximate reasoning and probabilistic reasoning using the Dempster-Shafer framework We investigate the problem of inferring information about the value of a variable V from its relationship with another variable U and information about U. We consider two approaches, one using the fuzzy set based theory of approximate reasoning and the other using probabilistic reasoning. Both of these approaches allow the inclusion of imprecise granular type information. The inferred values from each of these methods are then represented using a Dempster-Shafer belief structure. We then compare these values and show an underlying unity between these two approaches.
FIOWHM operator and its application to multiple attribute group decision making To study the problem of multiple attribute decision making in which the decision information takes the form of triangular fuzzy numbers, a new group decision making method is proposed. Then the calculation steps to solve it are given. As the key step, a new operator called the fuzzy induced ordered weighted harmonic mean (FIOWHM) operator is proposed, and a method based on the fuzzy weighted harmonic mean (FWHM) operator and FIOWHM operators for fuzzy MAGDM is presented. A priority ranking based on the possibility degree is proposed for the fuzzy multiple attribute decision making problem. Finally, a numerical example is provided to illustrate the proposed method. The result shows that the approach is simple, effective and easy to compute.
A Method Based on OWA Operator and Distance Measures for Multiple Attribute Decision Making with 2-Tuple Linguistic Information In this paper we develop a new method for 2-tuple linguistic multiple attribute decision making, namely the 2-tuple linguistic generalized ordered weighted averaging distance (2LGOWAD) operator. This operator is an extension of the OWA operator that utilizes generalized means, distance measures and uncertain information represented as 2-tuple linguistic variables. By using 2LGOWAD, it is possible to obtain a wide range of 2-tuple linguistic aggregation distance operators such as the 2-tuple linguistic maximum distance, the 2-tuple linguistic minimum distance, the 2-tuple linguistic normalized Hamming distance (2LNHD), the 2-tuple linguistic weighted Hamming distance (2LWHD), the 2-tuple linguistic normalized Euclidean distance (2LNED), the 2-tuple linguistic weighted Euclidean distance (2LWED), the 2-tuple linguistic ordered weighted averaging distance (2LOWAD) operator and the 2-tuple linguistic Euclidean ordered weighted averaging distance (2LEOWAD) operator. We study some of its main properties, and we further generalize the 2LGOWAD operator using quasi-arithmetic means. The result is the Quasi-2LOWAD operator. Finally we present an application of the developed operators to decision-making regarding the selection of investment strategies.
Fuzzy induced generalized aggregation operators and its application in multi-person decision making We present a wide range of fuzzy induced generalized aggregation operators such as the fuzzy induced generalized ordered weighted averaging (FIGOWA) and the fuzzy induced quasi-arithmetic OWA (Quasi-FIOWA) operator. They are aggregation operators that use the main characteristics of the fuzzy OWA (FOWA) operator, the induced OWA (IOWA) operator and the generalized (or quasi-arithmetic) OWA operator. Therefore, they use uncertain information represented in the form of fuzzy numbers, generalized (or quasi-arithmetic) means and order inducing variables. The main advantage of these operators is that they include a wide range of mean operators such as the FOWA, the IOWA, the induced Quasi-OWA, the fuzzy IOWA, the fuzzy generalized mean and the fuzzy weighted quasi-arithmetic average (Quasi-FWA). We further generalize this approach by using Choquet integrals, obtaining the fuzzy induced quasi-arithmetic Choquet integral aggregation (Quasi-FICIA) operator. We also develop an application of the new approach in a strategic multi-person decision making problem.
Decision making with extended fuzzy linguistic computing, with applications to new product development and survey analysis Fuzzy set theory, with its ability to capture and process uncertainties and vagueness inherent in subjective human reasoning, has been under continuous development since its introduction in the 1960s. Recently, the 2-tuple fuzzy linguistic computing has been proposed as a methodology to aggregate fuzzy opinions ( Herrera & Martinez, 2000a, 2000b ), for example, in the evaluation of new product development performance ( Wang, 2009 ) and in customer satisfactory level survey analysis ( Lin & Lee, 2009 ). The 2-tuple fuzzy linguistic approach has the advantage of avoiding information loss that can potentially occur when combining opinions of experts. Given the fuzzy ratings of the evaluators, the computation procedure used in both Wang (2009) and Lin and Lee (2009) returned a single crisp value as an output, representing the average judgment of those evaluators. In this article, we take an alternative view that the result of aggregating fuzzy ratings should be fuzzy itself, and therefore we further develop the 2-tuple fuzzy linguistic methodology so that its output is a fuzzy number describing the aggregation of opinions. We demonstrate the utility of the extended fuzzy linguistic computing methodology by applying it to two data sets: (i) the evaluation of a new product idea in a Taiwanese electronics manufacturing firm and (ii) the evaluation of the investment benefit of a proposed facility site.
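For readers unfamiliar with the 2-tuple representation these methods build on, here is a minimal sketch of the standard Delta / Delta-inverse translation and a weighted aggregation of expert ratings. The seven-term scale and the ratings are illustrative assumptions, and the extension proposed in the paper (returning a fuzzy rather than crisp aggregate) is not reproduced here.

```python
# Standard 2-tuple linguistic representation and a weighted aggregation of ratings.
TERMS = ["none", "very_low", "low", "medium", "high", "very_high", "perfect"]  # s_0..s_6

def delta(beta):
    """Map beta in [0, g] to a 2-tuple (term index, symbolic translation)."""
    i = int(round(beta))
    return i, beta - i                      # alpha lies in [-0.5, 0.5)

def delta_inv(two_tuple):
    i, alpha = two_tuple
    return i + alpha

def aggregate(two_tuples, weights):
    beta = sum(w * delta_inv(t) for t, w in zip(two_tuples, weights)) / sum(weights)
    return delta(beta)

ratings = [(4, 0.0), (5, -0.2), (3, 0.4)]   # three experts, already expressed as 2-tuples
idx, alpha = aggregate(ratings, [0.5, 0.3, 0.2])
print(f"aggregated opinion: ({TERMS[idx]}, {alpha:+.2f})")
```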
A sequential selection process in group decision making with a linguistic assessment approach In this paper a Sequential Selection Process in Group Decision Making under linguistic assessments is presented, where a set of linguistic preference relations represents individuals' preferences. A collective linguistic preference is obtained by means of a defined linguistic ordered weighted averaging operator whose weights are chosen according to the concept of fuzzy majority, specified by a fuzzy linguistic quantifier. Then we define the concepts of linguistic nondominance, linguistic...
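A small sketch of how a fuzzy linguistic quantifier induces the OWA weights mentioned above, using Yager's rule w_i = Q(i/n) - Q((i-1)/n) with the usual (0.3, 0.8) parameters for "most". The index-based aggregation at the end is a simplification of the label-level LOWA operator, shown only for illustration.

```python
# OWA weights induced by a fuzzy linguistic quantifier ("most" with a=0.3, b=0.8).
def quantifier(r, a=0.3, b=0.8):
    if r < a:
        return 0.0
    if r > b:
        return 1.0
    return (r - a) / (b - a)

def owa_weights(n, a=0.3, b=0.8):
    return [quantifier(i / n, a, b) - quantifier((i - 1) / n, a, b)
            for i in range(1, n + 1)]

def lowa_index(ordered_label_indices, weights):
    """Simplified: aggregate ordered linguistic label indices into one rounded index."""
    return round(sum(w * x for w, x in zip(weights, ordered_label_indices)))

w = owa_weights(5)
print("weights for 'most' with n=5:", [round(x, 3) for x in w])
# labels ordered from best to worst, as indices into a 7-term scale s_0..s_6 (illustrative)
print("collective label index:", lowa_index([6, 5, 5, 3, 2], w))
```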
Fuzzy multiple criteria forestry decision making based on an integrated VIKOR and AHP approach Forestation and forest preservation in urban watersheds are issues of vital importance as forested watersheds not only preserve the water supplies of a city but also contribute to soil erosion prevention. The use of fuzzy multiple criteria decision aid (MCDA) in urban forestation has the advantage of rendering subjective and implicit decision making more objective and transparent. An additional merit of fuzzy MCDA is its ability to accommodate quantitative and qualitative data. In this paper an integrated VIKOR-AHP methodology is proposed to make a selection among the alternative forestation areas in Istanbul. In the proposed methodology, the weights of the selection criteria are determined by fuzzy pairwise comparison matrices of AHP. It is found that Omerli watershed is the most appropriate forestation district in Istanbul.
Web-based Multi-Criteria Group Decision Support System with Linguistic Term Processing Function Organizational decisions are often made in groups where group members may be distributed geographically in different locations. Furthermore, a decision-making process, in practice, frequently involves various uncertain factors including linguistic expressions of decision makers' preferences and opinions. This study first proposes a rational-political group decision-making model which identifies three uncertain factors involved in a group decision-making process: decision makers' roles in a group reaching a satisfactory solution, preferences for alternatives and judgments for assessment-criteria. Based on the model, a linguistic term oriented multi-criteria group decision-making method is developed. The method uses general fuzzy number to deal with the three uncertain factors described by linguistic terms and aggregates these factors into a group satisfactory decision that is in a most acceptable degree of the group. Moreover, this study implements the method by developing a web-based group decision support system. This system allows decision makers to participate a group decision-making through the web, and manages the group decision-making process as a whole, from criteria generation, alternative evaluation, opinions interaction to decision aggregation. Finally, an application of the system is presented to illustrate the web-based group decision support system.
Perceptual reasoning for perceptual computing: a similarity-based approach Perceptual reasoning (PR) is an approximate reasoning method that can be used as a computing-with-words (CWW) engine in perceptual computing. There can be different approaches to implement PR, e.g., firing-interval-based PR (FI-PR), which has been proposed in J. M. Mendel and D. Wu, IEEE Trans. Fuzzy Syst., vol. 16, no. 6, pp. 1550-1564, Dec. 2008 and similarity-based PR (SPR), which is proposed in this paper. Both approaches satisfy the requirement on a CWW engine that the result of combining fired rules should lead to a footprint of uncertainty (FOU) that resembles the three kinds of FOUs in a CWW codebook. A comparative study shows that S-PR leads to output FOUs that resemble word FOUs, which are obtained from subject data, much more closely than FI-PR; hence, S-PR is a better choice for a CWW engine than FI-PR.
Systematic image processing for diagnosing brain tumors: A Type-II fuzzy expert system approach This paper presents a systematic Type-II fuzzy expert system for diagnosing human brain tumors (Astrocytoma tumors) using T1-weighted Magnetic Resonance Images with contrast. The proposed Type-II fuzzy image processing method has four distinct modules: Pre-processing, Segmentation, Feature Extraction, and Approximate Reasoning. We develop a fuzzy rule base by aggregating the existing filtering methods for the Pre-processing step. For the Segmentation step, we extend the Possibilistic C-Mean (PCM) method by using Type-II fuzzy concepts, the Mahalanobis distance, and the Kwon validity index. Feature Extraction is done by a Thresholding method. Finally, we develop a Type-II Approximate Reasoning method to recognize the tumor grade in brain MRI. The proposed Type-II expert system has been tested and validated to show its accuracy in the real world. The results show that the proposed system is superior to Type-I fuzzy expert systems in recognizing the brain tumor and its grade.
Gossip Algorithms for Distributed Signal Processing Gossip algorithms are attractive for in-network processing in sensor networks because they do not require any specialized routing, there is no bottleneck or single point of failure, and they are robust to unreliable wireless network conditions. Recently, there has been a surge of activity in the computer science, control, signal processing, and information theory communities, developing faster and...
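The distributed averaging primitive at the heart of this literature is easy to sketch: at each step a randomly chosen edge is activated and its two endpoints replace their values with the pairwise average. The ring topology, the number of steps, and the initial readings are illustrative assumptions.

```python
# Randomized gossip averaging on a ring; pairwise averaging preserves the sum.
import random

def gossip_average(values, edges, steps=5000, seed=0):
    x = list(values)
    rng = random.Random(seed)
    for _ in range(steps):
        i, j = rng.choice(edges)
        avg = 0.5 * (x[i] + x[j])
        x[i] = x[j] = avg
    return x

n = 10
ring = [(i, (i + 1) % n) for i in range(n)]
readings = [float(i) for i in range(n)]          # true average is 4.5
result = gossip_average(readings, ring)
print("max deviation from true average:", max(abs(v - 4.5) for v in result))
```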
Sublinear compressive sensing reconstruction via belief propagation decoding We propose a new compressive sensing scheme, based on codes of graphs, that allows for joint design of sensing matrices and low complexity reconstruction algorithms. The compressive sensing matrices can be shown to offer asymptotically optimal performance when used in combination with OMP methods. For more elaborate greedy reconstruction schemes, we propose a new family of list decoding and multiple-basis belief propagation algorithms. Our simulation results indicate that the proposed CS scheme offers good complexity-performance tradeoffs for several classes of sparse signals.
Thermal switching error versus delay tradeoffs in clocked QCA circuits The quantum-dot cellular automata (QCA) model offers a novel nano-domain computing architecture by mapping the intended logic onto the lowest energy configuration of a collection of QCA cells, each with two possible ground states. A four-phased clocking scheme has been suggested to keep the computations at the ground state throughout the circuit. This clocking scheme, however, induces latency or delay in the transmission of information from input to output. In this paper, we study the interplay of computing error behavior with delay or latency of computation induced by the clocking scheme. Computing errors in QCA circuits can arise due to the failure of the clocking scheme to switch portions of the circuit to the ground state with change in input. Some of these non-ground states will result in output errors and some will not. The larger the size of each clocking zone, i.e., the greater the number of cells in each zone, the more the probability of computing errors. However, larger clocking zones imply faster propagation of information from input to output, i.e., reduced delay. Current QCA simulators compute just the ground state configuration of a QCA arrangement. In this paper, we offer an efficient method to compute the N-lowest energy modes of a clocked QCA circuit. We model the QCA cell arrangement in each zone using a graph-based probabilistic model, which is then transformed into a Markov tree structure defined over subsets of QCA cells. This tree structure allows us to compute the N-lowest energy configurations in an efficient manner by local message passing. We analyze the complexity of the model and show it to be polynomial in terms of the number of cells, assuming a finite neighborhood of influence for each QCA cell, which is usually the case. The overall low-energy spectrum of multiple clocking zones is constructed by concatenating the low-energy spectra of the individual clocking zones. We demonstrate how the model can be used to study the tradeoff between switching errors and clocking zones.
1.021808
0.024
0.022362
0.020677
0.007839
0.003506
0.001297
0.000357
0.000171
0.000078
0.000001
0
0
0
Uncertainty quantification of electronic and photonic ICs with non-Gaussian correlated process variations Since the invention of generalized polynomial chaos in 2002, uncertainty quantification has impacted many engineering fields, including variation-aware design automation of integrated circuits and integrated photonics. Due to the fast convergence rate, the generalized polynomial chaos expansion has achieved orders-of-magnitude speedups over Monte Carlo in many applications. However, almost all existing generalized polynomial chaos methods have a strong assumption: the uncertain parameters are mutually independent or Gaussian correlated. This assumption rarely holds in many realistic applications, and it has been a long-standing challenge for both theorists and practitioners. This paper proposes a rigorous and efficient solution to address the challenge of non-Gaussian correlation. We first extend generalized polynomial chaos, and propose a class of smooth basis functions to efficiently handle non-Gaussian correlations. Then, we consider high-dimensional parameters, and develop a scalable tensor method to compute the proposed basis functions. Finally, we develop a sparse solver with adaptive sample selections to solve high-dimensional uncertainty quantification problems. We validate our theory and algorithm by electronic and photonic ICs with 19 to 57 non-Gaussian correlated variation parameters. The results show that our approach outperforms Monte Carlo by 2500× to 3000× in terms of efficiency. Moreover, our method can accurately predict the output density functions with multiple peaks caused by non-Gaussian correlations, which are hard to handle with existing methods. Based on the results in this paper, many novel uncertainty quantification algorithms can be developed and can be further applied to a broad range of engineering domains.
Multi-Wafer Virtual Probe: Minimum-cost variation characterization by exploring wafer-to-wafer correlation In this paper, we propose a new technique, referred to as Multi-Wafer Virtual Probe (MVP) to efficiently model wafer-level spatial variations for nanoscale integrated circuits. Towards this goal, a novel Bayesian inference is derived to extract a shared model template to explore the wafer-to-wafer correlation information within the same lot. In addition, a robust regression algorithm is proposed to automatically detect and remove outliers (i.e., abnormal measurement data with large error) so that they do not bias the modeling results. The proposed MVP method is extensively tested for silicon measurement data collected from 200 wafers at an advanced technology node. Our experimental results demonstrate that MVP offers superior accuracy over other traditional approaches such as VP and EM, if a limited number of measurement data are available.
Bayesian Model Fusion: A statistical framework for efficient pre-silicon validation and post-silicon tuning of complex analog and mixed-signal circuits In this paper, we describe a novel statistical framework, referred to as Bayesian Model Fusion (BMF), that allows us to minimize the simulation and/or measurement cost for both pre-silicon validation and post-silicon tuning of analog and mixed-signal (AMS) circuits with consideration of large-scale process variations. The BMF technique is motivated by the fact that today's AMS design cycle typically spans multiple stages (e.g., schematic design, layout design, first tape-out, second tape-out, etc.). Hence, we can reuse the simulation and/or measurement data collected at an early stage to facilitate efficient validation and tuning of AMS circuits with a minimal amount of data at the late stage. The efficacy of BMF is demonstrated by using several industrial circuit examples.
Tensor Computation: A New Framework for High-Dimensional Problems in EDA. Many critical electronic design automation (EDA) problems suffer from the curse of dimensionality, i.e., the very fast-scaling computational burden produced by large number of parameters and/or unknown variables. This phenomenon may be caused by multiple spatial or temporal factors (e.g., 3-D field solvers discretizations and multirate circuit simulation), nonlinearity of devices and circuits, large number of design or optimization parameters (e.g., full-chip routing/placement and circuit sizing), or extensive process variations (e.g., variability /reliability analysis and design for manufacturability). The computational challenges generated by such high-dimensional problems are generally hard to handle efficiently with traditional EDA core algorithms that are based on matrix and vector computation. This paper presents “tensor computation” as an alternative general framework for the development of efficient EDA algorithms and tools. A tensor is a high-dimensional generalization of a matrix and a vector, and is a natural choice for both storing and solving efficiently high-dimensional EDA problems. This paper gives a basic tutorial on tensors, demonstrates some recent examples of EDA applications (e.g., nonlinear circuit modeling and high-dimensional uncertainty quantification), and suggests further open EDA problems where the use of tensor computation could be of advantage.
Stochastic Testing Method for Transistor-Level Uncertainty Quantification Based on Generalized Polynomial Chaos Uncertainties have become a major concern in integrated circuit design. In order to avoid the huge number of repeated simulations in conventional Monte Carlo flows, this paper presents an intrusive spectral simulator for statistical circuit analysis. Our simulator employs the recently developed generalized polynomial chaos expansion to perform uncertainty quantification of nonlinear transistor circuits with both Gaussian and non-Gaussian random parameters. We modify the nonintrusive stochastic collocation (SC) method and develop an intrusive variant called stochastic testing (ST) method. Compared with the popular intrusive stochastic Galerkin (SG) method, the coupled deterministic equations resulting from our proposed ST method can be solved in a decoupled manner at each time point. At the same time, ST requires fewer samples and allows more flexible time step size controls than directly using a nonintrusive SC solver. These two properties make ST more efficient than SG and than existing SC methods, and more suitable for time-domain circuit simulation. Simulation results of several digital, analog and RF circuits are reported. Since our algorithm is based on generic mathematical models, the proposed ST algorithm can be applied to many other engineering problems.
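A generic one-parameter polynomial-chaos sketch (not the paper's stochastic testing solver) may clarify why such expansions need so few samples: the response is projected onto probabilists' Hermite polynomials with Gauss-Hermite quadrature, and the mean and variance are read directly from the coefficients. The toy response function, expansion order, and quadrature size are illustrative assumptions.

```python
# One-dimensional Hermite polynomial chaos for a toy response of a Gaussian parameter.
import numpy as np
from numpy.polynomial import hermite_e as H
from math import factorial, sqrt, pi

def f(xi):
    # toy "circuit response" depending on one normalized process parameter (made up)
    return np.exp(0.3 * xi) / (1.0 + 0.1 * xi**2)

order, quad_pts = 6, 20
nodes, weights = H.hermegauss(quad_pts)
weights = weights / sqrt(2.0 * pi)            # normalize to the standard normal pdf

# projection: c_n = E[f(xi) He_n(xi)] / n!   (He_n has norm n! under the standard normal)
vals = f(nodes)
coeffs = []
for n in range(order + 1):
    basis = H.hermeval(nodes, [0] * n + [1])
    coeffs.append(np.sum(weights * vals * basis) / factorial(n))

mean = coeffs[0]
variance = sum(factorial(n) * coeffs[n] ** 2 for n in range(1, order + 1))
print("gPC mean/std :", mean, sqrt(variance))

mc = f(np.random.default_rng(0).normal(size=200000))   # brute-force Monte Carlo check
print("MC  mean/std :", mc.mean(), mc.std())
```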
Semantics of Context-Free Languages "Meaning" may be assigned to a string in a context-free language by defining "attributes" of the symbols in a derivation tree for that string. The attributes can be defined by functions associated with each production in the grammar. This paper examines the implications of this process when some of the attributes are "synthesized", i.e., defined solely in terms of attributes of the descendants of the corresponding nonterminal symbol, while other attributes are "inherited", i.e., defined in terms of attributes of the ancestors of the nonterminal symbol. An algorithm is given which detects when such semantic rules could possibly lead to circular definition of some attributes. An example is given of a simple programming language defined with both inherited and synthesized attributes, and the method of definition is compared to other techniques for formal specification of semantics which have appeared in the literature. A simple technique for specifying the "meaning" of languages defined by context-free grammars is introduced in Section 1 of this paper, and its basic mathematical properties are investigated in Sections 2 and 3. An example which indicates how the technique can be applied to the formal definition of programming languages is described in Section 4, and finally, Section 5 contains a somewhat biased comparison of the present method to other known techniques for semantic definition. The discussion in this paper is oriented primarily towards programming languages, but the same methods appear to be relevant also in the study of natural languages. 1. Introduction. Let us suppose that we want to give a precise definition of binary notation for numbers. This can be done in many ways, and in this section we want to consider a manner of definition which can be generalized so that the meaning of other notations can be expressed in the same way. One such way to define binary notation is to base a definition on
Sets with type-2 operations The algebra of truth values of type-2 fuzzy sets consists of all mappings of the unit interval to itself, with type-2 operations that are convolutions of ordinary max and min operations. This paper is concerned with a special subalgebra of this truth value algebra, namely the set of nonzero functions with values in the two-element set {0,1}. This algebra can be identified with the set of all non-empty subsets of the unit interval, but the operations are not the usual union and intersection. We give simplified descriptions of the operations and derive the basic algebraic properties of this algebra, including the identification of its automorphism group. We also discuss some subalgebras and homomorphisms between them and look briefly at t-norms on this algebra of sets.
Fuzzy logic systems for engineering: a tutorial A fuzzy logic system (FLS) is unique in that it is able to simultaneously handle numerical data and linguistic knowledge. It is a nonlinear mapping of an input data (feature) vector into a scalar output, i.e., it maps numbers into numbers. Fuzzy set theory and fuzzy logic establish the specifics of the nonlinear mapping. This tutorial paper provides a guided tour through those aspects of fuzzy sets and fuzzy logic that are necessary to synthesize an FLS. It does this by starting with crisp set theory and dual logic and demonstrating how both can be extended to their fuzzy counterparts. Because engineering systems are, for the most part, causal, we impose causality as a constraint on the development of the FLS. After synthesizing a FLS, we demonstrate that it can be expressed mathematically as a linear combination of fuzzy basis functions, and is a nonlinear universal function approximator, a property that it shares with feedforward neural networks. The fuzzy basis function expansion is very powerful because its basis functions can be derived from either numerical data or linguistic knowledge, both of which can be cast into the forms of IF-THEN rules
Convergence Rates of Best N-term Galerkin Approximations for a Class of Elliptic sPDEs Deterministic Galerkin approximations of a class of second order elliptic PDEs with random coefficients on a bounded domain D ⊂ ℝ^d are introduced and their convergence rates are estimated. The approximations are based on expansions of the random diffusion coefficients in L^2(D)-orthogonal bases, and on viewing the coefficients of these expansions as random parameters y = y(ω) = (y_i(ω)). This yields an equivalent parametric deterministic PDE whose solution u(x,y) is a function of both the space variable x ∈ D and the in general countably many parameters y. We establish new regularity theorems describing the smoothness properties of the solution u as a map from y ∈ U = (−1,1)^∞ to V = H^1_0(D). These results lead to analytic estimates on the V norms of the coefficients (which are functions of x) in a so-called "generalized polynomial chaos" (gpc) expansion of u. Convergence estimates of approximations of u by best N-term truncated V-valued polynomials in the variable y ∈ U are established. These estimates are of the form N^{−r}, where the rate of convergence r depends only on the decay of the random input expansion. It is shown that r exceeds the benchmark rate 1/2 afforded by Monte Carlo simulations with N "samples" (i.e., deterministic solves) under mild smoothness conditions on the random diffusion coefficients. A class of fully discrete approximations is obtained by Galerkin approximation from a hierarchic family {V_l}_{l=0}^∞ ⊂ V of finite element spaces in D of the coefficients in the N-term truncated gpc expansions of u(x,y). In contrast to previous works, the level l of spatial resolution is adapted to the gpc coefficient. New regularity theorems describing the smoothness properties of the solution u as a map from y ∈ U = (−1,1)^∞ to a smoothness space W ⊂ V are established, leading to analytic estimates on the W norms of the gpc coefficients and on their space discretization error. The space W coincides with H^2(D) ∩ H^1_0(D) in the case where D is a smooth or convex domain. Our analysis shows that in realistic settings a convergence rate N_dof^{−s} in terms of the total number of degrees of freedom N_dof can be obtained. Here the rate s is determined by both the best N-term approximation rate r and the approximation order of the space discretization in D.
Proceedings of the 47th Design Automation Conference, DAC 2010, Anaheim, California, USA, July 13-18, 2010
Restricted Eigenvalue Properties for Correlated Gaussian Designs Methods based on l1-relaxation, such as basis pursuit and the Lasso, are very popular for sparse regression in high dimensions. The conditions for success of these methods are now well-understood: (1) exact recovery in the noiseless setting is possible if and only if the design matrix X satisfies the restricted nullspace property, and (2) the squared l2-error of a Lasso estimate decays at the minimax optimal rate k log p / n, where k is the sparsity of the p-dimensional regression problem with additive Gaussian noise, whenever the design satisfies a restricted eigenvalue condition. The key issue is thus to determine when the design matrix X satisfies these desirable properties. Thus far, there have been numerous results showing that the restricted isometry property, which implies both the restricted nullspace and eigenvalue conditions, is satisfied when all entries of X are independent and identically distributed (i.i.d.), or the rows are unitary. This paper proves directly that the restricted nullspace and eigenvalue conditions hold with high probability for quite general classes of Gaussian matrices for which the predictors may be highly dependent, and hence restricted isometry conditions can be violated with high probability. In this way, our results extend the attractive theoretical guarantees on l1-relaxations to a much broader class of problems than the case of completely independent or unitary designs.
A Simple Compressive Sensing Algorithm for Parallel Many-Core Architectures In this paper we consider the l1-compressive sensing problem. We propose an algorithm specifically designed to take advantage of shared-memory, vectorized, parallel and many-core microprocessors such as the Cell processor, new generation Graphics Processing Units (GPUs) and standard vectorized multi-core processors (e.g. quad-core CPUs). Moreover, its implementation is easy. We also give evidence of the efficiency of our approach and compare the algorithm on the three platforms, thus exhibiting pros and cons for each of them.
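For reference, the kind of simple, vectorizable iteration such hardware-oriented work targets can be sketched with plain iterative soft thresholding (ISTA) for min 0.5*||Ax - y||^2 + lam*||x||_1. This is not the paper's exact algorithm; the problem sizes, lam, and iteration count are illustrative assumptions.

```python
# Iterative soft thresholding (ISTA) for the l1-regularized compressive sensing problem.
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, y, lam=0.05, iters=500):
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - y)
        x = soft_threshold(x - grad / L, lam / L)
    return x

rng = np.random.default_rng(0)
n, m, k = 400, 120, 10                       # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
A = rng.normal(size=(m, n)) / np.sqrt(m)
y = A @ x_true
x_hat = ista(A, y)
print("recovery error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```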
A fuzzy CBR technique for generating product ideas This paper presents a fuzzy CBR (case-based reasoning) technique for generating new product ideas from a product database for enhancing the functions of a given product (called the baseline product). In the database, a product is modeled by a 100-attribute vector, 87 of which are used to model the use-scenario and 13 are used to describe the manufacturing/recycling features. Based on the use-scenario attributes and their relative weights - determined by a fuzzy AHP technique, a fuzzy CBR retrieving mechanism is developed to retrieve product-ideas that tend to enhance the functions of the baseline product. Based on the manufacturing/recycling features, a fuzzy CBR mechanism is developed to screen the retrieved product ideas in order to obtain a higher ratio of valuable product ideas. Experiments indicate that the retrieving-and-filtering mechanism outperforms the prior retrieving-only mechanism in terms of generating a higher ratio of valuable product ideas.
The laws of large numbers for fuzzy random variables A new approach to the weak and strong laws of large numbers for fuzzy random variables is discussed in this paper by proposing convergence in probability and convergence with probability one for fuzzy random variables. We first consider the limit properties of fuzzy numbers by invoking the Hausdorff metric, and then extend them to convergence in probability and convergence with probability one for fuzzy random variables. We provide the notions of weak and strong convergence in probability and of weak and strong convergence with probability one for fuzzy random variables. Finally, we arrive at the weak and strong laws of large numbers for fuzzy random variables in both senses.
1.1
0.05
0.033333
0.02
0.008696
0
0
0
0
0
0
0
0
0
Exploring Strategies for Training Deep Neural Networks Deep multi-layer neural networks have many levels of non-linearities allowing them to compactly represent highly non-linear and highly-varying functions. However, until recently it was not clear how to train such deep networks, since gradient-based optimization starting from random initialization often appears to get stuck in poor solutions. Hinton et al. recently proposed a greedy layer-wise unsupervised learning procedure relying on the training algorithm of restricted Boltzmann machines (RBM) to initialize the parameters of a deep belief network (DBN), a generative model with many layers of hidden causal variables. This was followed by the proposal of another greedy layer-wise procedure, relying on the usage of autoassociator networks. In the context of the above optimization problem, we study these algorithms empirically to better understand their success. Our experiments confirm the hypothesis that the greedy layer-wise unsupervised training strategy helps the optimization by initializing weights in a region near a good local minimum, but also implicitly acts as a sort of regularization that brings better generalization and encourages internal distributed representations that are high-level abstractions of the input. We also present a series of experiments aimed at evaluating the link between the performance of deep neural networks and practical aspects of their topology, for example, demonstrating cases where the addition of more depth helps. Finally, we empirically explore simple variants of these training algorithms, such as the use of different RBM input unit distributions, a simple way of combining gradient estimators to improve performance, as well as on-line versions of those algorithms.
Towards situated speech understanding: visual context priming of language models Fuse is a situated spoken language understanding system that uses visual context to steer the interpretation of speech. Given a visual scene and a spoken description, the system finds the object in the scene that best fits the meaning of the description. To solve this task, Fuse performs speech recognition and visually-grounded language understanding. Rather than treat these two problems separately, knowledge of the visual semantics of language and the specific contents of the visual scene are fused during speech processing. As a result, the system anticipates various ways a person might describe any object in the scene, and uses these predictions to bias the speech recognizer towards likely sequences of words. A dynamic visual attention mechanism is used to focus processing on likely objects within the scene as spoken utterances are processed. Visual attention and language prediction reinforce one another and converge on interpretations of incoming speech signals which are most consistent with visual context. In evaluations, the introduction of visual context into the speech recognition process results in significantly improved speech recognition and understanding accuracy. The underlying principles of this model may be applied to a wide range of speech understanding problems including mobile and assistive technologies in which contextual information can be sensed and semantically interpreted to bias processing.
Embodied Language Understanding with a Multiple Timescale Recurrent Neural Network How the human brain understands natural language and what we can learn for intelligent systems is open research. Recently, researchers claimed that language is embodied in most — if not all — sensory and sensorimotor modalities and that the brain's architecture favours the emergence of language. In this paper we investigate the characteristics of such an architecture and propose a model based on the Multiple Timescale Recurrent Neural Network, extended by embodied visual perception. We show that such an architecture can learn the meaning of utterances with respect to visual perception and that it can produce verbal utterances that correctly describe previously unknown scenes.
The hierarchical and functional connectivity of higher-order cognitive mechanisms: neurorobotic model to investigate the stability and flexibility of working memory. Higher-order cognitive mechanisms (HOCM), such as planning, cognitive branching, switching, etc., are known to be the outcomes of a unique neural organizations and dynamics between various regions of the frontal lobe. Although some recent anatomical and neuroimaging studies have shed light on the architecture underlying the formation of such mechanisms, the neural dynamics and the pathways in and between the frontal lobe to form and/or to tune the stability level of its working memory remain controversial. A model to clarify this aspect is therefore required. In this study, we propose a simple neurocomputational model that suggests the basic concept of how HOCM, including the cognitive branching and switching in particular, may mechanistically emerge from time-based neural interactions. The proposed model is constructed such that its functional and structural hierarchy mimics, to a certain degree, the biological hierarchy that is believed to exist between local regions in the frontal lobe. Thus, the hierarchy is attained not only by the force of the layout architecture of the neural connections but also through distinct types of neurons, each with different time properties. To validate the model, cognitive branching and switching tasks were simulated in a physical humanoid robot driven by the model. Results reveal that separation between the lower and the higher-level neurons in such a model is an essential factor to form an appropriate working memory to handle cognitive branching and switching. The analyses of the obtained result also illustrates that the breadth of this separation is important to determine the characteristics of the resulting memory, either static memory or dynamic memory. This work can be considered as a joint research between synthetic and empirical studies, which can open an alternative research area for better understanding of brain mechanisms.
Compressed Sensing with Coherent and Redundant Dictionaries This article presents novel results concerning the recovery of signals from undersampled data in the common situation where such signals are not sparse in an orthonormal basis or incoherent dictionary, but in a truly redundant dictionary. This work thus bridges a gap in the literature and shows not only that compressed sensing is viable in this context, but also that accurate recovery is possible via an ℓ1-analysis optimization problem. We introduce a condition on the measurement/sensing matrix, which is a natural generalization of the now well-known restricted isometry property, and which guarantees accurate recovery of signals that are nearly sparse in (possibly) highly overcomplete and coherent dictionaries. This condition imposes no incoherence restriction on the dictionary and our results may be the first of this kind. We discuss practical examples and the implications of our results on those applications, and complement our study by demonstrating the potential of ℓ1-analysis for such problems.
Compressed Sensing. Suppose x is an unknown vector in ℝ^m (a digital image or signal); we plan to measure n general linear functionals of x and then reconstruct. If x is known to be compressible by transform coding with a known transform, and we reconstruct via the nonlinear procedure defined here, the number of measurements n can be dramatically smaller than the size m. Thus, certain natural classes of images with m pixels need only n = O(m^{1/4} log^{5/2}(m)) nonadaptive nonpixel samples for faithful recovery, as opposed to the usual m pixel samples. More specifically, suppose x has a sparse representation in some orthonormal basis (e.g., wavelet, Fourier) or tight frame (e.g., curvelet, Gabor), so the coefficients belong to an ℓ_p ball for 0 < p ≤ 1. The N most important coefficients in that expansion allow reconstruction with ℓ_2 error O(N^{1/2-1/p}). It is possible to design n = O(N log(m)) nonadaptive measurements allowing reconstruction with accuracy comparable to that attainable with direct knowledge of the N most important coefficients. Moreover, a good approximation to those N important coefficients is extracted from the n measurements by solving a linear program, Basis Pursuit in signal processing. The nonadaptive measurements have the character of "random" linear combinations of basis/frame elements. Our results use the notions of optimal recovery, of n-widths, and information-based complexity. We estimate the Gel'fand n-widths of ℓ_p balls in high-dimensional Euclidean space in the case 0 < p ≤ 1, and give a criterion identifying near-optimal subspaces for Gel'fand n-widths. We show that "most" subspaces are near-optimal, and show that convex optimization (Basis Pursuit) is a near-optimal way to extract information derived from these near-optimal subspaces.
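As a concrete illustration of the recovery step described in the entry above, here is a minimal sketch of Basis Pursuit: ℓ1 minimization subject to the measurement constraints, posed as a linear program. The measurement matrix, sparsity level, and function names are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, y):
    """Solve min ||x||_1 subject to A x = y via the split x = u - v with u, v >= 0."""
    m = A.shape[1]
    c = np.ones(2 * m)                      # objective: sum(u) + sum(v) = ||x||_1
    A_eq = np.hstack([A, -A])               # encodes A(u - v) = y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
    u, v = res.x[:m], res.x[m:]
    return u - v

# Toy demo: recover a 3-sparse vector of length 100 from 40 random measurements.
rng = np.random.default_rng(0)
x_true = np.zeros(100)
x_true[rng.choice(100, 3, replace=False)] = rng.standard_normal(3)
A = rng.standard_normal((40, 100)) / np.sqrt(40)
x_hat = basis_pursuit(A, A @ x_true)
print(np.max(np.abs(x_hat - x_true)))       # typically near zero
```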
An optimal algorithm for approximate nearest neighbor searching fixed dimensions Consider a set S of n data points in real d-dimensional space, ℝ^d, where distances are measured using any Minkowski metric. In nearest neighbor searching, we preprocess S into a data structure, so that given any query point q ∈ ℝ^d, the closest point of S to q can be reported quickly. Given any positive real ε, a data point p is a (1 + ε)-approximate nearest neighbor of q if its distance from q is within a factor of (1 + ε) of the distance to the true nearest neighbor. We show that it is possible to preprocess a set of n points in ℝ^d in O(dn log n) time and O(dn) space, so that given a query point q ∈ ℝ^d and ε > 0, a (1 + ε)-approximate nearest neighbor of q can be computed in O(c_{d,ε} log n) time, where c_{d,ε} ≤ d⌈1 + 6d/ε⌉^d is a factor depending only on the dimension and ε. In general, we show that given an integer k ≥ 1, (1 + ε)-approximations to the k nearest neighbors of q can be computed in additional O(kd log n) time.
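The (1 + ε) guarantee described above can be tried out directly with SciPy's k-d tree, whose query method accepts an eps argument with the same meaning. This is only an illustrative stand-in, not the BBD-tree data structure of the paper; the point set and dimension are arbitrary choices.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)
points = rng.random((10_000, 8))                 # n points in d = 8 dimensions
tree = cKDTree(points)                           # preprocessing step

q = rng.random(8)
d_exact, i_exact = tree.query(q, k=1)            # exact nearest neighbor
d_approx, i_approx = tree.query(q, k=1, eps=0.5) # (1 + eps)-approximate query
assert d_approx <= (1 + 0.5) * d_exact           # the approximation guarantee
```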
Asymptotic Sampling Distribution for Polynomial Chaos Representation from Data: A Maximum Entropy and Fisher Information Approach A procedure is presented for characterizing the asymptotic sampling distribution of estimators of the polynomial chaos (PC) coefficients of a second-order nonstationary and non-Gaussian random process by using a collection of observations. The random process represents a physical quantity of interest, and the observations made over a finite denumerable subset of the indexing set of the random process are considered to form a set of realizations of a random vector $\mathcal{Y}$ representing a finite-dimensional projection of the random process. The Karhunen-Loève decomposition and a scaling transformation are employed to produce a reduced-order model $\mathcal{Z}$ of $\mathcal{Y}$. The PC expansion of $\mathcal{Z}$ is next determined by having recourse to the maximum-entropy principle, the Metropolis-Hastings Markov chain Monte Carlo algorithm, and the Rosenblatt transformation. The resulting PC expansion has random coefficients, where the random characteristics of the PC coefficients can be attributed to the limited data available from the experiment. The estimators of the PC coefficients of $\mathcal{Y}$ obtained from that of $\mathcal{Z}$ are found to be maximum likelihood estimators as well as consistent and asymptotically efficient. Computation of the covariance matrix of the associated asymptotic normal distribution of estimators of the PC coefficients of $\mathcal{Y}$ requires knowledge of the Fisher information matrix (FIM). The FIM is evaluated here by using a numerical integration scheme as well as a sampling technique. The resulting confidence interval on the PC coefficient estimators essentially reflects the effect of incomplete information (due to data limitation) on the characterization of the stochastic process. This asymptotic distribution is significant as its characteristics can be propagated through predictive models for which the stochastic process in question describes uncertainty on some input parameters.
On the Smolyak Cubature Error for Analytic Functions this paper, the author has been informed that Gerstner and Griebel [4] rediscovered this method. For algorithmic details, we refer to their paper. The resulting Smolyak cubature formulae are denoted by Q
SOS: The MOS is not enough! When it comes to analysis and interpretation of the results of subjective QoE studies, one often witnesses a lack of attention to the diversity in subjective user ratings. In extreme cases, solely Mean Opinion Scores (MOS) are reported, causing the loss of important information on the user rating diversity. In this paper, we emphasize the importance of considering the Standard deviation of Opinion Scores (SOS) and analyze important characteristics of this measure. As a result, we formulate the SOS hypothesis which postulates a square relationship between the MOS and the SOS. We demonstrate the validity and applicability of the SOS hypothesis for a wide range of studies. The main benefit of the SOS hypothesis is that it allows for a compact, yet still comprehensive statistical summary of subjective user tests. Furthermore, it supports checking the reliability of test result data sets as well as their comparability across different QoE studies.
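The square relationship postulated above can be fitted to rating data with a single free parameter. A rough sketch follows, assuming a 5-point ACR scale so that the maximal-variance shape is -x^2 + 6x - 5; the MOS/SOS pairs below are made-up placeholders, not results from the paper.

```python
import numpy as np

def fit_sos_parameter(mos, sos, low=1.0, high=5.0):
    """Least-squares fit of SOS^2 = a * (-MOS^2 + (low+high)*MOS - low*high)."""
    mos, sos = np.asarray(mos, float), np.asarray(sos, float)
    basis = -mos**2 + (low + high) * mos - low * high   # maximal possible variance at each MOS
    return float(np.sum(basis * sos**2) / np.sum(basis**2))

# Hypothetical per-condition MOS/SOS pairs from a subjective test on a 5-point scale.
mos = [1.3, 2.1, 3.0, 3.9, 4.6]
sos = [0.5, 0.9, 1.0, 0.8, 0.5]
print(fit_sos_parameter(mos, sos))   # the SOS parameter a (between 0 and 1)
```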
Fuzzy Logic and the Resolution Principle The relationship between fuzzy logic and two-valued logic in the context of the first order predicate calculus is discussed. It is proved that if every clause in a set of clauses is something more than a "half-truth" and the most reliable clause has truth-value a and the most unreliable clause has truth-value b, then we are guaranteed that all the logical consequences obtained by repeatedly applying the resolution principle will have truth-value between a and b. The significance of this theorem is also discussed.
Machine Understanding of Natural Language
Rapid method to account for process variation in full-chip capacitance extraction Full-chip capacitance extraction programs based on lookup techniques, such as HILEX/CUP , can be enhanced to rigorously account for process variations in the dimensions of very large scale integration interconnect wires with only modest additional computational effort. HILEX/CUP extracts interconnect capacitance from layout using analytical models with reasonable accuracy. These extracted capacitances are strictly valid only for the nominal interconnect dimensions; the networked nature of capacitive relationships in dense, complex interconnect structures precludes simple extrapolations of capacitance with dimensional changes. However, the derivatives, with respect to linewidth variation of the analytical models, can be accumulated along with the capacitance itself for each interacting pair of nodes. A numerically computed derivative with respect to metal and dielectric layer thickness variation can also be accumulated. Each node pair's extracted capacitance and its gradient with respect to linewidth and thickness variation on each metal and dielectric layer can be stored in a file. Thus, instead of storing a scalar value for each extracted capacitance, a vector of 3I+1 values will be stored for capacitance and its gradient, where I is the number of metal layers. Subsequently, this gradient information can be used during circuit simulation in conjunction with any arbitrary vector of interconnect process variations to perform sensitivity analysis of circuit performance.
Construction of interval-valued fuzzy entropy invariant by translations and scalings In this paper, we propose a method to construct interval-valued fuzzy entropies (Burillo and Bustince 1996). This method uses special aggregation functions applied to interval-contrasts. In this way, we are able to construct interval-valued fuzzy entropies from automorphisms and implication operators. Finally, we study the invariance of our constructions by scaling and translation.
score_0 … score_13: 1.076132, 0.052625, 0.052625, 0.052625, 0.014286, 0.000201, 0.000011, 0.000004, 0, 0, 0, 0, 0, 0
Matchings and transversals in hypergraphs, domination and independence-in trees A family of hypergraphs is exhibited which have the property that the minimum cardinality of a transversal is equal to the maximum cardinality of a matching. A result concerning domination and independence in trees which generalises a recent result of Meir and Moon is deduced.
Domination in intersecting hypergraphs. A matching in a hypergraph H is a set of pairwise disjoint hyperedges. The matching number α′(H) of H is the size of a maximum matching in H. A subset D of vertices of H is a dominating set of H if for every v∈V∖D there exists u∈D such that u and v lie in a hyperedge of H. The cardinality of a minimum dominating set of H is called the domination number of H, denoted by γ(H). It is known that for an intersecting hypergraph H with rank r, γ(H)≤r−1. In this paper we present structural properties on intersecting hypergraphs with rank r satisfying the equality γ(H)=r−1. By applying these properties we show that all linear intersecting hypergraphs H with rank 4 satisfying γ(H)=r−1 can be constructed by the well-known Fano plane.
Linear hypergraphs with large transversal number and maximum degree two For k ≥ 2, let H be a k-uniform hypergraph on n vertices and m edges. The transversal number τ(H) of H is the minimum number of vertices that intersect every edge. Chvatal and McDiarmid [V. Chvatal, C. McDiarmid, Small transversals in hypergraphs, Combinatorica 12 (1992) 19-26] proved that τ(H) ≤ (n + ⌊k/2⌋m)/⌊3k/2⌋. In particular, for k ∈ {2,3} we have that (k+1)τ(H) ≤ n + m. A linear hypergraph is one in which every two distinct edges of H intersect in at most one vertex. In this paper, we consider the following question posed by Henning and Yeo: Is it true that if H is linear, then (k+1)τ(H) ≤ n + m holds for all k ≥ 2? If k ≥ 4 and we relax the linearity constraint, then this is not always true. We show that if Δ(H) ≤ 2, then (k+1)τ(H) ≤ n + m does hold for all k ≥ 2 and we characterize the hypergraphs achieving equality in this bound.
Matching and domination numbers in r-uniform hypergraphs. A matching is a set of pairwise disjoint hyperedges of a hypergraph H. The matching number ν(H) of H is the maximum cardinality of a matching. A subset D of vertices of H is called a dominating set of H if for every vertex v not in D there exists u ∈ D such that u and v are contained in a hyperedge of H. The minimum cardinality of a dominating set of H is called the domination number of H and is denoted by γ(H). In this paper we show that every r-uniform hypergraph H satisfies the inequality γ(H) ≤ (r-1)ν(H) and the bound is sharp.
Equality of domination and transversal numbers in hypergraphs A subset S of the vertex set of a hypergraph H is called a dominating set of H if for every vertex v not in S there exists u ∈ S such that u and v are contained in an edge in H. The minimum cardinality of a dominating set in H is called the domination number of H and is denoted by γ(H). A transversal of a hypergraph H is defined to be a subset T of the vertex set such that T ∩ E ≠ ∅ for every edge E of H. The transversal number of H, denoted by τ(H), is the minimum number of vertices in a transversal. A hypergraph is of rank k if each of its edges contains at most k vertices. The inequality τ(H) ≥ γ(H) is valid for every hypergraph H without isolated vertices. In this paper, we investigate the hypergraphs satisfying τ(H) = γ(H), and prove that their recognition problem is NP-hard already on the class of linear hypergraphs of rank 3, while on unrestricted problem instances it lies inside the complexity class Θ_2^p. Structurally we focus our attention on hypergraphs in which each subhypergraph H′ without isolated vertices fulfills the equality τ(H′) = γ(H′). We show that if each induced subhypergraph satisfies the equality then it holds for the non-induced ones as well. Moreover, we prove that for every positive integer k, there are only a finite number of forbidden subhypergraphs of rank k, and each of them has domination number at most k.
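For hypergraphs small enough to enumerate, the two quantities compared in the entries above can be computed by brute force. The sketch below is exponential in the number of vertices and purely illustrative; the example hypergraph is our own, not one from the papers.

```python
from itertools import combinations

def transversal_number(vertices, edges):
    """tau(H): smallest vertex set meeting every hyperedge."""
    for size in range(len(vertices) + 1):
        for cand in combinations(vertices, size):
            if all(set(e) & set(cand) for e in edges):
                return size

def domination_number(vertices, edges):
    """gamma(H): smallest D such that every vertex outside D shares an edge with D."""
    def dominated(cand):
        cand = set(cand)
        return all(v in cand or any(v in e and cand & set(e) for e in edges) for v in vertices)
    for size in range(len(vertices) + 1):
        for cand in combinations(vertices, size):
            if dominated(cand):
                return size

V = [1, 2, 3, 4, 5, 6]
E = [{1, 2, 3}, {3, 4, 5}, {5, 6, 1}, {2, 4, 6}]          # a small rank-3 example
print(transversal_number(V, E), domination_number(V, E))  # gamma <= tau when no vertex is isolated
```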
Small transversals in hypergraphs For each positive integer k, we consider the set A_k of all ordered pairs [a, b] such that in every k-graph with n vertices and m edges some set of at most am + bn vertices meets all the edges. We show that each A_k with k ≥ 2 has infinitely many extreme points and conjecture that, for every positive ε, it has only finitely many extreme points [a, b] with a ≥ ε. With the extreme points ordered by the first coordinate, we identify the last two extreme points of every A_k, identify the last three extreme points of A_3, and describe A_2 completely. A by-product of our arguments is a new algorithmic proof of Turán's theorem.
Independent systems of representatives in weighted graphs The following conjecture may have never been explicitly stated, but seems to have been floating around: if the vertex set of a graph with maximal degree Δ is partitioned into sets V_i of size 2Δ, then there exists a coloring of the graph by 2Δ colors, where each color class meets each V_i at precisely one vertex. We shall name it the strong 2Δ-colorability conjecture. We prove a fractional version of this conjecture. For this purpose, we prove a weighted generalization of a theorem of Haxell, on independent systems of representatives (ISR’s). En route, we give a survey of some recent developments in the theory of ISR’s.
Learning and classification of monotonic ordinal concepts
Proactive secret sharing or: How to cope with perpetual leakage Secret sharing schemes protect secrets by distributing them over different locations (share holders). In particular, in k out of n threshold schemes, security is assured if throughout the entire life-time of the secret the adversary is restricted to compromise less than k of the n locations. For long-lived and sensitive secrets this protection may be insufficient. We propose an efficient proactive secret sharing scheme, where shares are periodically renewed (without changing the secret) in such a way that information gained by the adversary in one time period is useless for attacking the secret after the shares are renewed. Hence, the adversary willing to learn the secret needs to break into all k locations during the same time period (e.g., one day, a week, etc.). Furthermore, in order to guarantee the availability and integrity of the secret, we provide mechanisms to detect maliciously (or accidentally) corrupted shares, as well as mechanisms to secretly recover the correct shares when modification is detected.
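The renewal idea above (adding fresh shares of zero so the secret is preserved while old shares become useless) is easy to sketch on top of plain Shamir sharing. This is a bare illustration over a prime field, with none of the paper's verification or share-recovery machinery; the modulus and function names are our own choices.

```python
import random

P = 2**127 - 1  # a Mersenne prime, used here as the field modulus purely for illustration

def make_shares(secret, k, n):
    """Shamir shares of `secret`: evaluate a random degree-(k-1) polynomial at x = 1..n."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    return {x: sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P for x in range(1, n + 1)}

def refresh(shares, k):
    """Proactive renewal: add shares of 0, leaving the secret unchanged but invalidating old shares."""
    zero = make_shares(0, k, len(shares))
    return {x: (s + zero[x]) % P for x, s in shares.items()}

def reconstruct(shares, k):
    """Lagrange interpolation at x = 0 using any k shares."""
    pts = list(shares.items())[:k]
    secret = 0
    for i, (xi, yi) in enumerate(pts):
        num, den = 1, 1
        for j, (xj, _) in enumerate(pts):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = make_shares(123456789, k=3, n=5)
shares = refresh(shares, k=3)              # period boundary: every holder updates its share
assert reconstruct(shares, k=3) == 123456789
```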
Criticality computation in parameterized statistical timing Chips manufactured in 90 nm technology have shown large parametric variations, and a worsening trend is predicted. These parametric variations make circuit optimization difficult since different paths are frequency-limiting in different parts of the multi-dimensional process space. Therefore, it is desirable to have a new diagnostic metric for robust circuit optimization. This paper presents a novel algorithm to compute the criticality probability of every edge in the timing graph of a design with linear complexity in the circuit size. Using industrial benchmarks, we verify the correctness of our criticality computation via Monte Carlo simulation. We also show that for large industrial designs with 442,000 gates, our algorithm computes all edge criticalities in less than 160 seconds
Mono-multi bipartite Ramsey numbers, designs, and matrices Eroh and Oellermann defined BRR(G1, G2) as the smallest N such that any edge coloring of the complete bipartite graph KN, N contains either a monochromatic G1 or a multicolored G2. We restate the problem of determining BRR(K1,λ, Kr,s) in matrix form and prove estimates and exact values for several choices of the parameters. Our general bound uses Füredi's result on fractional matchings of uniform hypergraphs and we show that it is sharp if certain block designs exist. We obtain two sharp results for the case r = s = 2: we prove BRR(K1,λ, K2,2) = 3λ - 2 and that the smallest n for which any edge coloring of Kλ,n contains either a monochromatic K1,λ or a multicolored K2,2 is λ².
Hierarchical statistical characterization of mixed-signal circuits using behavioral modeling A methodology for hierarchical statistical circuit characterization which does not rely upon circuit-level Monte Carlo simulation is presented. The methodology uses principal component analysis, response surface methodology, and statistics to directly calculate the statistical distributions of higher-level parameters from the distributions of lower-level parameters. We have used the methodology to characterize a folded cascode operational amplifier and a phase-locked loop. This methodology permits the statistical characterization of large analog and mixed-signal systems, many of which are extremely time-consuming or impossible to characterize using existing methods.
Spectral Methods for Parameterized Matrix Equations. We apply polynomial approximation methods-known in the numerical PDEs context as spectral methods-to approximate the vector-valued function that satisfies a linear system of equations where the matrix and the right-hand side depend on a parameter. We derive both an interpolatory pseudospectral method and a residual-minimizing Galerkin method, and we show how each can be interpreted as solving a truncated infinite system of equations; the difference between the two methods lies in where the truncation occurs. Using classical theory, we derive asymptotic error estimates related to the region of analyticity of the solution, and we present a practical residual error estimate. We verify the results with two numerical examples.
On Fuzziness, Its Homeland and Its Neighbour
score_0 … score_13: 1.072767, 0.066667, 0.051081, 0.051081, 0.031688, 0.006789, 0.000025, 0, 0, 0, 0, 0, 0, 0
Sparsity preserving projections with applications to face recognition Dimensionality reduction methods (DRs) have commonly been used as a principled way to understand the high-dimensional data such as face images. In this paper, we propose a new unsupervised DR method called sparsity preserving projections (SPP). Unlike many existing techniques such as local preserving projection (LPP) and neighborhood preserving embedding (NPE), where local neighborhood information is preserved during the DR procedure, SPP aims to preserve the sparse reconstructive relationship of the data, which is achieved by minimizing a L1 regularization-related objective function. The obtained projections are invariant to rotations, rescalings and translations of the data, and more importantly, they contain natural discriminating information even if no class labels are provided. Moreover, SPP chooses its neighborhood automatically and hence can be more conveniently used in practice compared to LPP and NPE. The feasibility and effectiveness of the proposed method is verified on three popular face databases (Yale, AR and Extended Yale B) with promising results.
Beyond sparsity: The role of L1-optimizer in pattern classification The newly-emerging sparse representation-based classifier (SRC) shows great potential for pattern classification but lacks theoretical justification. This paper gives an insight into SRC and seeks reasonable supports for its effectiveness. SRC uses the L1-optimizer instead of the L0-optimizer on account of computational convenience and efficiency. We re-examine the role of the L1-optimizer and find that for pattern recognition tasks, the L1-optimizer provides more classification meaningful information than the L0-optimizer does. The L0-optimizer can achieve sparsity only, whereas the L1-optimizer can achieve closeness as well as sparsity. Sparsity determines a small number of nonzero representation coefficients, while closeness makes the nonzero representation coefficients concentrate on the training samples with the same class label as the given test sample. Thus, it is closeness that guarantees the effectiveness of the L1-optimizer based SRC. Based on the closeness prior, we further propose two kinds of class L1-optimizer classifiers (CL1C), the closeness rule based CL1C (C-CL1C) and its improved version: the Lasso rule based CL1C (L-CL1C). The proposed classifiers are evaluated on five databases and the experimental results demonstrate advantages of the proposed classifiers over SRC in classification performance and computational efficiency for large sample size problems.
Convergence of Fixed-Point Continuation Algorithms for Matrix Rank Minimization The matrix rank minimization problem has applications in many fields, such as system identification, optimal control, low-dimensional embedding, etc. As this problem is NP-hard in general, its convex relaxation, the nuclear norm minimization problem, is often solved instead. Recently, Ma, Goldfarb and Chen proposed a fixed-point continuation algorithm for solving the nuclear norm minimization problem (Math. Program., doi: 10.1007/s10107-009-0306-5, 2009). By incorporating an approximate singular value decomposition technique in this algorithm, the solution to the matrix rank minimization problem is usually obtained. In this paper, we study the convergence/recoverability properties of the fixed-point continuation algorithm and its variants for matrix rank minimization. Heuristics for determining the rank of the matrix when its true rank is not known are also proposed. Some of these algorithms are closely related to greedy algorithms in compressed sensing. Numerical results for these algorithms for solving affinely constrained matrix rank minimization problems are reported.
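The workhorse inside this family of methods is shrinkage of singular values, the proximal operator of the nuclear norm. Below is a minimal numpy sketch of that operator plus a naive fixed-point-style iteration on a toy matrix completion instance; it is an illustrative simplification, not the paper's algorithm, and the threshold and iteration count are arbitrary choices.

```python
import numpy as np

def svt(Y, tau):
    """Soft-threshold the singular values of Y by tau."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

# Toy matrix completion: observe ~50% of a rank-2 matrix, iterate X <- svt(X + mask*(M - X), tau).
rng = np.random.default_rng(2)
M = rng.standard_normal((30, 2)) @ rng.standard_normal((2, 30))
mask = rng.random(M.shape) < 0.5
X = np.zeros_like(M)
for _ in range(500):
    X = svt(X + mask * (M - X), tau=0.5)
print(np.linalg.norm(X - M) / np.linalg.norm(M))   # relative error, should be well below 1
```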
Sparse Representation for Computer Vision and Pattern Recognition Techniques from sparse signal representation are beginning to see significant impact in computer vision, often on nontraditional applications where the goal is not just to obtain a compact high-fidelity representation of the observed signal, but also to extract semantic information. The choice of dictionary plays a key role in bridging this gap: unconventional dictionaries consisting of, or learne...
Quantization of Sparse Representations Compressive sensing (CS) is a new signal acquisition technique for sparse and compressible signals. Rather than uniformly sampling the signal, CS computes inner products with randomized basis functions; the signal is then recovered by a convex optimization. Random CS measurements are universal in the sense that the same acquisition system is sufficient for signals sparse in any representation. This paper examines the quantization of strictly sparse, power-limited signals and concludes that CS with scalar quantization uses its allocated rate inefficiently.
Subspace pursuit for compressive sensing signal reconstruction We propose a new method for reconstruction of sparse signals with and without noisy perturbations, termed the subspace pursuit algorithm. The algorithm has two important characteristics: low computational complexity, comparable to that of orthogonal matching pursuit techniques when applied to very sparse signals, and reconstruction accuracy of the same order as that of linear programming (LP) optimization methods. The presented analysis shows that in the noiseless setting, the proposed algorithm can exactly reconstruct arbitrary sparse signals provided that the sensing matrix satisfies the restricted isometry property with a constant parameter. In the noisy setting and in the case that the signal is not exactly sparse, it can be shown that the mean-squared error of the reconstruction is upper-bounded by constant multiples of the measurement and signal perturbation energies.
Sparse representation for color image restoration. Sparse representations of signals have drawn considerable interest in recent years. The assumption that natural signals, such as images, admit a sparse decomposition over a redundant dictionary leads to efficient algorithms for handling such sources of data. In particular, the design of well adapted dictionaries for images has been a major challenge. The K-SVD has been recently proposed for this task and shown to perform very well for various grayscale image processing tasks. In this paper, we address the problem of learning dictionaries for color images and extend the K-SVD-based grayscale image denoising algorithm that appears in. This work puts forward ways for handling nonhomogeneous noise and missing information, paving the way to state-of-the-art results in applications such as color image denoising, demosaicing, and inpainting, as demonstrated in this paper.
Sets with type-2 operations The algebra of truth values of type-2 fuzzy sets consists of all mappings of the unit interval to itself, with type-2 operations that are convolutions of ordinary max and min operations. This paper is concerned with a special subalgebra of this truth value algebra, namely the set of nonzero functions with values in the two-element set {0,1}. This algebra can be identified with the set of all non-empty subsets of the unit interval, but the operations are not the usual union and intersection. We give simplified descriptions of the operations and derive the basic algebraic properties of this algebra, including the identification of its automorphism group. We also discuss some subalgebras and homomorphisms between them and look briefly at t-norms on this algebra of sets.
Fuzzy connection admission control for ATM networks based on possibility distribution of cell loss ratio This paper proposes a connection admission control (CAC) method for asynchronous transfer mode (ATM) networks based on the possibility distribution of cell loss ratio (CLR). The possibility distribution is estimated in a fuzzy inference scheme by using observed data of the CLR. This method makes possible secure CAC, thereby guaranteeing the allowed CLR. First, a fuzzy inference method is proposed, based on a weighted average of fuzzy sets, in order to estimate the possibility distribution of the CLR. In contrast to conventional methods, the proposed inference method can avoid estimating excessively large values of the CLR. Second, the learning algorithm is considered for tuning fuzzy rules for inference. In this, energy functions are derived so as to efficiently achieve higher multiplexing gain by applying them to CAC. Because the upper bound of the CLR can easily be obtained from the possibility distribution by using this algorithm, CAC can be performed guaranteeing the allowed CLR. The simulation studies show that the proposed method can well extract the upper bound of the CLR from the observed data. The proposed method also makes possible self-compensation in real time for the case where the estimated CLR is smaller than the observed CLR. It preserves the guarantee of the CLR as much as possible in operation of ATM switches. Third, a CAC method which uses the fuzzy inference mentioned above is proposed. In the area with no observed CLR data, fuzzy rules are automatically generated from the fuzzy rules already tuned by the learning algorithm with the existing observed CLR data. Such areas exist because of the absence of experience in connections. This method can guarantee the allowed CLR in the CAC and attains a high multiplex gain as is possible. The simulation studies show its feasibility. Finally, this paper concludes with some brief discussions
Incremental criticality and yield gradients Criticality and yield gradients are two crucial diagnostic metrics obtained from Statistical Static Timing Analysis (SSTA). They provide valuable information to guide timing optimization and timing-driven physical synthesis. Existing work in the literature, however, computes both metrics in a non-incremental manner, i.e., after one or more changes are made in a previously-timed circuit, both metrics need to be recomputed from scratch, which is obviously undesirable for optimizing large circuits. The major contribution of this paper is to propose two novel techniques to compute both criticality and yield gradients efficiently and incrementally. In addition, while node and edge criticalities are addressed in the literature, this paper for the first time describes a technique to compute path criticalities. To further improve algorithmic efficiency, this paper also proposes a novel technique to update "chip slack" incrementally. Numerical results show our methods to be over two orders of magnitude faster than previous work.
Fast image recovery using variable splitting and constrained optimization We propose a new fast algorithm for solving one of the standard formulations of image restoration and reconstruction which consists of an unconstrained optimization problem where the objective includes an l2 data-fidelity term and a nonsmooth regularizer. This formulation allows both wavelet-based (with orthogonal or frame-based representations) regularization or total-variation regularization. Our approach is based on a variable splitting to obtain an equivalent constrained optimization formulation, which is then addressed with an augmented Lagrangian method. The proposed algorithm is an instance of the so-called alternating direction method of multipliers, for which convergence has been proved. Experiments on a set of image restoration and reconstruction benchmark problems show that the proposed algorithm is faster than the current state of the art methods.
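The variable-splitting recipe described above is easiest to see in its simplest special case: an l2 data term plus an l1 regularizer with an identity synthesis operator (plain lasso). The sketch below covers only that special case, with arbitrarily chosen ADMM parameters; it does not implement the frame-based or total-variation regularizers handled in the paper.

```python
import numpy as np

def soft(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def admm_lasso(A, y, lam, rho=1.0, iters=200):
    """min_x 0.5*||A x - y||^2 + lam*||x||_1 via the split x = z and alternating updates."""
    n = A.shape[1]
    AtA, Aty = A.T @ A, A.T @ y
    L = np.linalg.cholesky(AtA + rho * np.eye(n))      # factor once, reuse in every iteration
    x = z = u = np.zeros(n)                            # u is the scaled dual variable
    for _ in range(iters):
        x = np.linalg.solve(L.T, np.linalg.solve(L, Aty + rho * (z - u)))  # quadratic subproblem
        z = soft(x + u, lam / rho)                                         # l1 proximal step
        u = u + x - z                                                      # scaled dual update
    return z

# Hypothetical usage on a random sparse-recovery instance.
rng = np.random.default_rng(3)
A = rng.standard_normal((80, 200))
x0 = np.zeros(200); x0[:5] = 3.0
x_hat = admm_lasso(A, A @ x0 + 0.01 * rng.standard_normal(80), lam=0.1)
```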
Induced uncertain linguistic OWA operators applied to group decision making The ordered weighted averaging (OWA) operator was developed by Yager [IEEE Trans. Syst., Man, Cybernet. 18 (1988) 183]. Later, Yager and Filev [IEEE Trans. Syst., Man, Cybernet.--Part B 29 (1999) 141] introduced a more general class of OWA operators called the induced ordered weighted averaging (IOWA) operators, which take as their argument pairs, called OWA pairs, in which one component is used to induce an ordering over the second components which are exact numerical values and then aggregated. The aim of this paper is to develop some induced uncertain linguistic OWA (IULOWA) operators, in which the second components are uncertain linguistic variables. Some desirable properties of the IULOWA operators are studied, and then, the IULOWA operators are applied to group decision making with uncertain linguistic information.
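The order-inducing mechanism is simple to state in code. The sketch below implements a plain numeric IOWA aggregation; the paper's IULOWA operators act on uncertain linguistic variables instead, which this toy version does not model, and the weights and pairs are made up.

```python
def iowa(pairs, weights):
    """Induced OWA: pairs are (order-inducing value, argument); arguments are reordered by
    decreasing inducing value, then combined with the OWA weight vector."""
    assert len(pairs) == len(weights) and abs(sum(weights) - 1.0) < 1e-9
    reordered = [arg for _, arg in sorted(pairs, key=lambda p: p[0], reverse=True)]
    return sum(w * a for w, a in zip(weights, reordered))

# Example: importance-induced aggregation of three expert scores.
print(iowa([(0.9, 7.0), (0.4, 3.0), (0.7, 5.0)], [0.5, 0.3, 0.2]))  # 0.5*7 + 0.3*5 + 0.2*3 = 5.6
```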
New Type-2 Rule Ranking Indices for Designing Parsimonious Interval Type-2 Fuzzy Logic Systems In this paper, we propose two novel indices for type-2 fuzzy rule ranking to identify the most influential fuzzy rules in designing type-2 fuzzy logic systems, and name them as R-values and c-values of fuzzy rules separately. The R-values of type-2 fuzzy rules are obtained by applying QR decomposition in which there is no need to estimate a rank as required in the SVD-QR with column pivoting algorithm. The c-values of type-2 fuzzy rules are suggested to rank rules based on the effects of rule consequents. Experimental results on a signal recovery problem have shown that by using the proposed indices the most influential type-2 fuzzy rules can be effectively selected to construct parsimonious type-2 fuzzy models while the system performances are kept at a satisfied level.
Designing type-2 fuzzy logic system controllers via fuzzy Lyapunov synthesis for the output regulator of a servomechanism with nonlinear backlash Fuzzy Lyapunov Synthesis is extended to the design of Type-2 Fuzzy Logic System Controllers for the output regulation problem for a servomechanism with nonlinear backlash. The problem in question is to design a feedback controller so as to obtain the closed-loop system in which all trajectories are bounded and the load of the driver is regulated to a desired position while also attenuating the influence of external disturbances. The servomotor position is the only measurement available for feedback; the proposed extension is far from trivial because of nonminimum phase properties of the system. Performance issues of the Type-2 Fuzzy Logic Regulator constructed are illustrated in a simulation study.
score_0 … score_13: 1.025455, 0.018182, 0.009091, 0.006727, 0.002273, 0.001119, 0.000019, 0, 0, 0, 0, 0, 0, 0
Adaptive learning of linguistic hierarchy in a multiple timescale recurrent neural network Recent research has revealed that hierarchical linguistic structures can emerge in a recurrent neural network with a sufficient number of delayed context layers. As a representative of this type of network the Multiple Timescale Recurrent Neural Network (MTRNN) has been proposed for recognising and generating known as well as unknown linguistic utterances. However the training of utterances performed in other approaches demands a high training effort. In this paper we propose a robust mechanism for adaptive learning rates and internal states to speed up the training process substantially. In addition we compare the generalisation of the network for the adaptive mechanism as well as the standard fixed learning rates finding at least equal capabilities.
Towards situated speech understanding: visual context priming of language models Fuse is a situated spoken language understanding system that uses visual context to steer the interpretation of speech. Given a visual scene and a spoken description, the system finds the object in the scene that best fits the meaning of the description. To solve this task, Fuse performs speech recognition and visually-grounded language understanding. Rather than treat these two problems separately, knowledge of the visual semantics of language and the specific contents of the visual scene are fused during speech processing. As a result, the system anticipates various ways a person might describe any object in the scene, and uses these predictions to bias the speech recognizer towards likely sequences of words. A dynamic visual attention mechanism is used to focus processing on likely objects within the scene as spoken utterances are processed. Visual attention and language prediction reinforce one another and converge on interpretations of incoming speech signals which are most consistent with visual context. In evaluations, the introduction of visual context into the speech recognition process results in significantly improved speech recognition and understanding accuracy. The underlying principles of this model may be applied to a wide range of speech understanding problems including mobile and assistive technologies in which contextual information can be sensed and semantically interpreted to bias processing.
Embodied Language Understanding with a Multiple Timescale Recurrent Neural Network How the human brain understands natural language and what we can learn for intelligent systems is open research. Recently, researchers claimed that language is embodied in most — if not all — sensory and sensorimotor modalities and that the brain's architecture favours the emergence of language. In this paper we investigate the characteristics of such an architecture and propose a model based on the Multiple Timescale Recurrent Neural Network, extended by embodied visual perception. We show that such an architecture can learn the meaning of utterances with respect to visual perception and that it can produce verbal utterances that correctly describe previously unknown scenes.
The hierarchical and functional connectivity of higher-order cognitive mechanisms: neurorobotic model to investigate the stability and flexibility of working memory. Higher-order cognitive mechanisms (HOCM), such as planning, cognitive branching, switching, etc., are known to be the outcomes of a unique neural organizations and dynamics between various regions of the frontal lobe. Although some recent anatomical and neuroimaging studies have shed light on the architecture underlying the formation of such mechanisms, the neural dynamics and the pathways in and between the frontal lobe to form and/or to tune the stability level of its working memory remain controversial. A model to clarify this aspect is therefore required. In this study, we propose a simple neurocomputational model that suggests the basic concept of how HOCM, including the cognitive branching and switching in particular, may mechanistically emerge from time-based neural interactions. The proposed model is constructed such that its functional and structural hierarchy mimics, to a certain degree, the biological hierarchy that is believed to exist between local regions in the frontal lobe. Thus, the hierarchy is attained not only by the force of the layout architecture of the neural connections but also through distinct types of neurons, each with different time properties. To validate the model, cognitive branching and switching tasks were simulated in a physical humanoid robot driven by the model. Results reveal that separation between the lower and the higher-level neurons in such a model is an essential factor to form an appropriate working memory to handle cognitive branching and switching. The analyses of the obtained result also illustrates that the breadth of this separation is important to determine the characteristics of the resulting memory, either static memory or dynamic memory. This work can be considered as a joint research between synthetic and empirical studies, which can open an alternative research area for better understanding of brain mechanisms.
Exploring Strategies for Training Deep Neural Networks Deep multi-layer neural networks have many levels of non-linearities allowing them to compactly represent highly non-linear and highly-varying functions. However, until recently it was not clear how to train such deep networks, since gradient-based optimization starting from random initialization often appears to get stuck in poor solutions. Hinton et al. recently proposed a greedy layer-wise unsupervised learning procedure relying on the training algorithm of restricted Boltzmann machines (RBM) to initialize the parameters of a deep belief network (DBN), a generative model with many layers of hidden causal variables. This was followed by the proposal of another greedy layer-wise procedure, relying on the usage of autoassociator networks. In the context of the above optimization problem, we study these algorithms empirically to better understand their success. Our experiments confirm the hypothesis that the greedy layer-wise unsupervised training strategy helps the optimization by initializing weights in a region near a good local minimum, but also implicitly acts as a sort of regularization that brings better generalization and encourages internal distributed representations that are high-level abstractions of the input. We also present a series of experiments aimed at evaluating the link between the performance of deep neural networks and practical aspects of their topology, for example, demonstrating cases where the addition of more depth helps. Finally, we empirically explore simple variants of these training algorithms, such as the use of different RBM input unit distributions, a simple way of combining gradient estimators to improve performance, as well as on-line versions of those algorithms.
Anaphora for everyone: pronominal anaphora resolution without a parser We present an algorithm for anaphora resolution which is a modified and extended version of that developed by (Lappin and Leass, 1994). In contrast to that work, our algorithm does not require in-depth, full, syntactic parsing of text. Instead, with minimal compromise in output quality, the modifications enable the resolution process to work from the output of a part of speech tagger, enriched only with annotations of grammatical function of lexical items in the input text stream. Evaluation of the results of our implementation demonstrates that accurate anaphora resolution can be realized within natural language processing frameworks which do not---or cannot---employ robust and reliable parsing components.
Counter braids: a novel counter architecture for per-flow measurement Fine-grained network measurement requires routers and switches to update large arrays of counters at very high link speed (e.g. 40 Gbps). A naive algorithm needs an infeasible amount of SRAM to store both the counters and a flow-to-counter association rule, so that arriving packets can update corresponding counters at link speed. This has made accurate per-flow measurement complex and expensive, and motivated approximate methods that detect and measure only the large flows. This paper revisits the problem of accurate per-flow measurement. We present a counter architecture, called Counter Braids, inspired by sparse random graph codes. In a nutshell, Counter Braids "compresses while counting". It solves the central problems (counter space and flow-to-counter association) of per-flow measurement by "braiding" a hierarchy of counters with random graphs. Braiding results in drastic space reduction by sharing counters among flows; and using random graphs generated on-the-fly with hash functions avoids the storage of flow-to-counter association. The Counter Braids architecture is optimal (albeit with a complex decoder) as it achieves the maximum compression rate asymptotically. For implementation, we present a low-complexity message passing decoding algorithm, which can recover flow sizes with essentially zero error. Evaluation on Internet traces demonstrates that almost all flow sizes are recovered exactly with only a few bits of counter space per flow.
MapReduce: simplified data processing on large clusters MapReduce is a programming model and an associated implementation for processing and generating large datasets that is amenable to a broad variety of real-world tasks. Users specify the computation in terms of a map and a reduce function, and the underlying runtime system automatically parallelizes the computation across large-scale clusters of machines, handles machine failures, and schedules inter-machine communication to make efficient use of the network and disks. Programmers find the system easy to use: more than ten thousand distinct MapReduce programs have been implemented internally at Google over the past four years, and an average of one hundred thousand MapReduce jobs are executed on Google's clusters every day, processing a total of more than twenty petabytes of data per day.
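The programming model itself fits in a few lines; the sketch below imitates only the map, shuffle, and reduce phases of a word count on a single machine, with none of the distribution, scheduling, or fault tolerance that the paper is actually about.

```python
from collections import defaultdict

def map_phase(doc):
    return [(word, 1) for word in doc.split()]        # map: emit (key, value) pairs

def reduce_phase(word, counts):
    return word, sum(counts)                          # reduce: combine all values for one key

def mapreduce(docs):
    grouped = defaultdict(list)                       # the "shuffle": group values by key
    for doc in docs:
        for key, value in map_phase(doc):
            grouped[key].append(value)
    return dict(reduce_phase(k, v) for k, v in grouped.items())

print(mapreduce(["the cat sat", "the dog sat"]))      # {'the': 2, 'cat': 1, 'sat': 2, 'dog': 1}
```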
Efficient approximation of random fields for numerical applications This article is dedicated to the rapid computation of separable expansions for the approximation of random fields. We consider approaches based on techniques from the approximation of non-local operators on the one hand and based on the pivoted Cholesky decomposition on the other hand. Especially, we provide an a posteriori error estimate for the pivoted Cholesky decomposition in terms of the trace. Numerical examples are provided to validate and quantify the presented methods. Copyright (c) 2015 John Wiley & Sons, Ltd.
Recognition of shapes by attributed skeletal graphs In this paper, we propose a framework to address the problem of generic 2-D shape recognition. The aim is mainly on using the potential strength of skeleton of discrete objects in computer vision and pattern recognition where features of objects are needed for classification. We propose to represent the medial axis characteristic points as an attributed skeletal graph to model the shape. The information about the object shape and its topology is totally embedded in them and this allows the comparison of different objects by graph matching algorithms. The experimental results demonstrate the correctness in detecting its characteristic points and in computing a more regular and effective representation for a perceptual indexing. The matching process, based on a revised graduated assignment algorithm, has produced encouraging results, showing the potential of the developed method in a variety of computer vision and pattern recognition domains. The results demonstrate its robustness in the presence of scale, reflection and rotation transformations and prove the ability to handle noise and occlusions.
Compressive sampling for streaming signals with sparse frequency content Compressive sampling (CS) has emerged as significant signal processing framework to acquire and reconstruct sparse signals at rates significantly below the Nyquist rate. However, most of the CS development to-date has focused on finite-length signals and representations. In this paper we discuss a streaming CS framework and greedy reconstruction algorithm, the Streaming Greedy Pursuit (SGP), to reconstruct signals with sparse frequency content. Our proposed sampling framework and the SGP are explicitly intended for streaming applications and signals of unknown length. The measurement framework we propose is designed to be causal and implementable using existing hardware architectures. Furthermore, our reconstruction algorithm provides specific computational guarantees, which makes it appropriate for real-time system implementations. Our experiment results on very long signals demonstrate the good performance of the SGP and validate our approach.
QoE Aware Service Delivery in Distributed Environment Service delivery and customer satisfaction are strongly related items for a correct commercial management platform. Technical aspects targeting this issue relate to QoS parameters that can be handled by the platform, at least partially. Subjective psychological issues and human cognitive aspects are typically unconsidered aspects and they directly determine the Quality of Experience (QoE). These factors finally have to be considered as key input for a successful business operation between a customer and a company. In our work, a multi-disciplinary approach is taken to propose a QoE interaction model based on the theoretical results from various fields including psychology, cognitive sciences, sociology, service ecosystem and information technology. In this paper a QoE evaluator is described for assessing the service delivery in a distributed and integrated environment on per user and per service basis.
A model to perform knowledge-based temporal abstraction over multiple signals In this paper we propose the Multivariable Fuzzy Temporal Profile model (MFTP), which enables the projection of expert knowledge on a physical system over a computable description. This description may be used to perform automatic abstraction on a set of parameters that represent the temporal evolution of the system. This model is based on the constraint satisfaction problem (CSP) formalism, which enables an explicit representation of the knowledge, and on fuzzy set theory, from which it inherits the ability to model the imprecision and uncertainty that are characteristic of human knowledge vagueness. We also present an application of the MFTP model to the recognition of landmarks in mobile robotics, specifically to the detection of doors on ultrasound sensor signals from a Nomad 200 robot.
Fuzzy optimization of units products in mix-product selection problem using fuzzy linear programming approach In this paper, the modified S-curve membership function methodology is used in a real life industrial problem of mix product selection. This problem occurs in production planning management, whereby a decision maker plays an important role in making decisions in an uncertain environment. As analysts, we try to find a good enough solution for the decision maker to make a final decision. An industrial application of fuzzy linear programming (FLP) through the S-curve membership function has been investigated using a set of real life data collected from a Chocolate Manufacturing Company. The problem of fuzzy product mix selection has been defined. The objective of this paper is to find optimal units of products with a higher level of satisfaction, with vagueness as a key factor. Since there are several decisions that were to be taken, a table of optimal units of products with respect to vagueness and degree of satisfaction has been defined to identify the solution with a higher level of units of products and with a higher degree of satisfaction. The fuzzy outcome shows that higher units of products need not lead to a higher degree of satisfaction. The findings of this work indicate that the optimal decision depends on the vagueness factor in the fuzzy system of the mix product selection problem. Furthermore, the highest level of units of products is obtained when the vagueness is low.
score_0 … score_13: 1.072622, 0.070333, 0.070333, 0.070333, 0.017584, 0, 0, 0, 0, 0, 0, 0, 0, 0
The Inherent Indistinguishability in Fuzzy Systems This paper provides an overview of fuzzy systems from the viewpoint of similarity relations. Similarity relations turn out to be an appealing framework in which typical concepts and techniques applied in fuzzy systems and fuzzy control can be better understood and interpreted. They can also be used to describe the indistinguishability inherent in any fuzzy system that cannot be avoided.
Fuzzy homomorphisms of algebras In this paper we consider fuzzy relations compatible with algebraic operations, which are called fuzzy relational morphisms. In particular, we aim our attention to those fuzzy relational morphisms which are uniform fuzzy relations, called uniform fuzzy relational morphisms, and those which are partially uniform F-functions, called fuzzy homomorphisms. Both uniform fuzzy relations and partially uniform F-functions were introduced in a recent paper by us. Uniform fuzzy relational morphisms are especially interesting because they can be conceived as fuzzy congruences which relate elements of two possibly different algebras. We give various characterizations and constructions of uniform fuzzy relational morphisms and fuzzy homomorphisms, we establish certain relationships between them and fuzzy congruences, and we prove homomorphism and isomorphism theorems concerning them. We also point to some applications of uniform fuzzy relational morphisms.
Fuzzy modifiers based on fuzzy relations In this paper we introduce a new type of fuzzy modifiers (i.e. mappings that transform a fuzzy set into a modified fuzzy set) based on fuzzy relations. We show how they can be applied for the representation of weakening adverbs (more or less, roughly) and intensifying adverbs (very, extremely) in the inclusive and the non-inclusive interpretation. We illustrate their use in an approximate reasoning scheme.
Towards a Logic for a Fuzzy Logic Controller Without Abstract
Similarity relations and fuzzy orderings. The notion of ''similarity'' as defined in this paper is essentially a generalization of the notion of equivalence. In the same vein, a fuzzy ordering is a generalization of the concept of ordering. For example, the relation x ≫ y (x is much larger than y) is a fuzzy linear ordering in the set of real numbers. More concretely, a similarity relation, S, is a fuzzy relation which is reflexive, symmetric, and transitive. Thus, let x, y be elements of a set X and μ_S(x,y) denote the grade of membership of the ordered pair (x,y) in S. Then S is a similarity relation in X if and only if, for all x, y, z in X, μ_S(x,x) = 1 (reflexivity), μ_S(x,y) = μ_S(y,x) (symmetry), and μ_S(x,z) ≥ ∨_y (μ_S(x,y) ∧ μ_S(y,z)) (transitivity), where ∨ and ∧ denote max and min, respectively. A fuzzy ordering is a fuzzy relation which is transitive. In particular, a fuzzy partial ordering, P, is a fuzzy ordering which is reflexive and antisymmetric, that is, (μ_P(x,y) > 0 and x ≠ y) ⇒ μ_P(y,x) = 0. A fuzzy linear ordering is a fuzzy partial ordering in which x ≠ y ⇒ μ_S(x,y) > 0 or μ_S(y,x) > 0. A fuzzy preordering is a fuzzy ordering which is reflexive. A fuzzy weak ordering is a fuzzy preordering in which x ≠ y ⇒ μ_S(x,y) > 0 or μ_S(y,x) > 0. Various properties of similarity relations and fuzzy orderings are investigated and, as an illustration, an extended version of Szpilrajn's theorem is proved.
Artificial Paranoia
Processing fuzzy temporal knowledge L.A. Zadeh's (1975) possibility theory is used as a general framework for modeling temporal knowledge pervaded with imprecision or uncertainty. Ill-known dates, time intervals with fuzzy boundaries, fuzzy durations, and uncertain precedence relations between events can be dealt with in this approach. An explicit representation (in terms of possibility distributions) of the available information, which may be neither precise nor certain, is maintained. Deductive patterns of reasoning involving fuzzy and/or uncertain temporal knowledge are established, and the combination of fuzzy partial pieces of information is considered. A scheduled example with fuzzy temporal windows is discussed
On the capacity of MIMO broadcast channels with partial side information In multiple-antenna broadcast channels, unlike point-to-point multiple-antenna channels, the multiuser capacity depends heavily on whether the transmitter knows the channel coefficients to each user. For instance, in a Gaussian broadcast channel with M transmit antennas and n single-antenna users, the sum rate capacity scales like M log log n for large n if perfect channel state information (CSI) is available at the transmitter, yet only logarithmically with M if it is not. In systems with large n, obtaining full CSI from all users may not be feasible. Since lack of CSI does not lead to multiuser gains, it is therefore of interest to investigate transmission schemes that employ only partial CSI. We propose a scheme that constructs M random beams and that transmits information to the users with the highest signal-to-noise-plus-interference ratios (SINRs), which can be made available to the transmitter with very little feedback. For fixed M and n increasing, the throughput of our scheme scales as M log log(nN), where N is the number of receive antennas of each user. This is precisely the same scaling obtained with perfect CSI using dirty paper coding. We furthermore show that a linear increase in throughput with M can be obtained provided that M does not grow faster than log n. We also study the fairness of our scheduling in a heterogeneous network and show that, when M is large enough, the system becomes interference dominated and the probability of transmitting to any user converges to 1/n, irrespective of its path loss. In fact, using M = α log n transmit antennas emerges as a desirable operating point, both in terms of providing linear scaling of the throughput with M as well as in guaranteeing fairness.
A 2-tuple fuzzy linguistic representation model for computing with words The fuzzy linguistic approach has been applied successfully to many problems. However, there is a limitation of this approach imposed by its information representation model and the computation methods used when fusion processes are performed on linguistic values. This limitation is the loss of information; this loss of information implies a lack of precision in the final results from the fusion of linguistic information. In this paper, we present tools for overcoming this limitation. The linguistic information is expressed by means of 2-tuples, which are composed of a linguistic term and a numeric value assessed in (-0.5, 0.5). This model allows a continuous representation of the linguistic information on its domain, therefore, it can represent any counting of information obtained in an aggregation process. We then develop a computational technique for computing with words without any loss of information. Finally, different classical aggregation operators are extended to deal with the 2-tuple linguistic model.
Completeness and consistency conditions for learning fuzzy rules The completeness and consistency conditions were introduced in order to achieve acceptable concept recognition rules. In real problems, we can handle noise-affected examples and it is not always possible to maintain both conditions. Moreover, when we use fuzzy information there is a partial matching between examples and rules, therefore the consistency condition becomes a matter of degree. In this paper, a learning algorithm based on soft consistency and completeness conditions is proposed. This learning algorithm combines in a single process rule and feature selection and it is tested on different databases.
Estimation of (near) low-rank matrices with noise and high-dimensional scaling We study an instance of high-dimensional inference in which the goal is to estimate a matrix $\Theta^* \in \mathbb{R}^{m_1 \times m_2}$ on the basis of N noisy observations. The unknown matrix $\Theta^*$ is assumed to be either exactly low rank, or "near" low-rank, meaning that it can be well-approximated by a matrix with low rank. We consider a standard M-estimator based on regularization by the nuclear or trace norm over matrices, and analyze its performance under high-dimensional scaling. We define the notion of restricted strong convexity (RSC) for the loss function, and use it to derive nonasymptotic bounds on the Frobenius norm error that hold for a general class of noisy observation models, and apply to both exactly low-rank and approximately low-rank matrices. We then illustrate consequences of this general theory for a number of specific matrix models, including low-rank multivariate or multi-task regression, system identification in vector autoregressive processes and recovery of low-rank matrices from random projections. These results involve nonasymptotic random matrix theory to establish that the RSC condition holds, and to determine an appropriate choice of regularization parameter. Simulation results show excellent agreement with the high-dimensional scaling of the error predicted by our theory.
User impatience and network performance In this work, we analyze from passive measurements the correlations between the user-induced interruptions of TCP connections and different end-to-end performance metrics. The aim of this study is to assess the possibility for a network operator to take into account the customers' experience for network monitoring. We first observe that the usual connection-level performance metrics of the interrupted connections are not very different, and sometimes better than those of normal connections. However, the request-level performance metrics show stronger correlations between the interruption rates and the network quality-of-service. Furthermore, we show that the user impatience could also be used to characterize the relative sensitivity of data applications to various network performance metrics.
Fuzzy concepts and formal methods: some illustrative examples It has been recognised that formal methods are useful as a modelling tool in requirements engineering. Specification languages such as Z permit the precise and unambiguous modelling of system properties and behaviour. However, some system problems, particularly those drawn from the information systems (IS) problem domain, may be difficult to model in crisp or precise terms. It may also be desirable that formal modelling should commence as early as possible, even when our understanding of parts of the problem domain is only approximate. This paper identifies the problem types of interest and argues that they are characterised by uncertainty and imprecision. It suggests fuzzy set theory as a useful formalism for modelling aspects of this imprecision. The paper illustrates how a fuzzy logic toolkit for Z can be applied to such problem domains. Several examples are presented illustrating the representation of imprecise concepts as fuzzy sets and relations, and soft pre-conditions and system requirements as a series of linguistically quantified propositions.
The laws of large numbers for fuzzy random variables The new attempt of weak and strong law of large numbers for fuzzy random variables is discussed in this paper by proposing the convergence in probability and convergence with probability one for fuzzy random variables. We first consider the limit properties of fuzzy numbers by invoking the Hausdorff metric, and then we extend it to the convergence in probability and convergence with probability one for fuzzy random variables. We provide the notion of weak and strong convergence in probability and weak and strong convergence with probability one for fuzzy random variables. Finally we come up with the weak and strong law of large numbers for fuzzy random variables in weak and strong sense.
1.249984
0.249984
0.049997
0.001999
0.000027
0.000001
0
0
0
0
0
0
0
0
Practical, fast Monte Carlo statistical static timing analysis: why and how Statistical static timing analysis (SSTA) has emerged as an essential tool for nanoscale designs. Monte Carlo methods are universally employed to validate the accuracy of the approximations made in all SSTA tools, but Monte Carlo itself is never employed as a strategy for practical SSTA. It is widely believed to be "too slow" -- despite an uncomfortable lack of rigorous studies to support this belief. We offer the first large-scale study to refute this belief. We synthesize recent results from fast quasi-Monte Carlo (QMC) deterministic sampling and efficient Karhunen-Loève expansion (KLE) models of spatial correlation to show that Monte Carlo SSTA need not be slow. Indeed, we show that for the ISCAS89 circuits a few hundred well-chosen sample points can achieve errors within 5%, with no assumptions on gate models, wire models, or the core STA engine, with runtimes less than 90 s.
Measurement and characterization of pattern dependent process variations of interconnect resistance, capacitance and inductance in nanometer technologies Process variations have become a serious concern for nanometer technologies. The interconnect and device variations include inter-and intra-die variations of geometries, as well as process and electrical parameters. In this paper, pattern (i.e. density, width and space) dependent interconnect thickness and width variations are studied based on a well-designed test chip in a 90 nm technology. The parasitic resistance and capacitance variations due to the process variations are investigated, and process-variation-aware extraction techniques are proposed. In the test chip, electrical and physical measurements show strong metal thickness and width variations mainly due to chemical mechanical polishing (CMP) in nanometer technologies. The loop inductance dependence of return patterns is also validated in the test chip. The proposed new characterization methods extract interconnect RC variations as a function of metal density, width and space. Simulation results show excellent agreement between on-wafer measurements and extractions of various RC structures, including a set of metal loaded/unloaded ring oscillators in a complex wiring environment.
A divide-and-conquer algorithm for 3-D capacitance extraction We present a divide-and-conquer algorithm to improve the three-dimensional (3-D) boundary element method (BEM) for capacitance extraction. We divide large interconnect structures into small sections, set new boundary conditions using the border for each section, solve each section, and then combine the results to derive the capacitance. The target application is critical nets, clock trees, or packages where 3-D accuracy is required. Our algorithm is a significant improvement over the traditional BEMs and their enhancements, such as the "window" method, where conductors far away are dropped, and the "shield" method where conductors hidden behind other conductors are dropped. Experimental results show that our algorithm is an order of magnitude faster than the traditional BEM and the window+shield method, for medium to large structures. The error of the capacitance computed by the new algorithm is within 2% for self capacitance and 7% for coupling capacitance, compared with the results obtained by solving the entire system using BEM. Furthermore, our algorithm gives accurate distributed RC, where none of the previous 3-D BEM algorithms and their enhancements can.
A Fast Algorithm To Compute Irreducible and Primitive Polynomials in Finite Fields In this paper we present a method to compute all the irreducible and primitive polynomials of degree m over the finite field GF(q). Our method finds each new irreducible or primitive polynomial with a complexity of O(m) arithmetic operations in GF(q). The best previously known methods [3], [10] use the Berlekamp-Massey algorithm [7] and they have a complexity of O(m^2). We reach this improvement by taking into account a systolic implementation [2] of the extended Euclidean algorithm instead of using the Berlekamp-Massey algorithm.
Twisted GFSR generators II The twisted GFSR generators proposed in a previous article have a defect in k-distribution for k larger than the order of recurrence. In this follow-up article, we introduce and analyze a new TGFSR variant having a better k-distribution property. We provide an efficient algorithm to obtain the order of equidistribution, together with a tight upper bound on the order. We discuss a method to search for generators attaining this bound, and we list some such generators. The upper bound turns out to be (sometimes far) less than the maximum order of equidistribution for a generator of that period length, but far more than that for a GFSR with a working area of the same size.
Principle Hessian direction based parameter reduction with process variation As CMOS technology enters the nanometer regime, the increasing process variation is having a manifest impact on circuit performance. In this paper, we propose a Principle Hessian Direction (PHD) based parameter reduction approach. This new approach relies on the impact of each parameter on circuit performance to decide whether to keep or reduce the parameter. Compared with the existing principal component analysis (PCA) method, this performance-based property provides us with a significantly smaller set of parameters after reduction. The experimental results also support our conclusions. In all cases, an average reduction of 53% is observed with less than 3% error in the mean value and less than 8% error in the variation.
A Study of Variance Reduction Techniques for Estimating Circuit Yields The efficiency of several variance reduction techniques (in particular, importance sampling, stratified sampling, and control variates) is studied with respect to their application in estimating circuit yields. This study suggests that one essentially has to have a good approximation of the region of acceptability in order to achieve significant variance reduction. Further, all the methods considered are based, either explicitly or implicitly, on the use of a model. The control variate method appears to be more practical for implementation in a general purpose statistical circuit analysis program. Stratified sampling is the simplest to implement, but yields only very modest reductions in the variance of the yield estimator. Lastly, importance sampling is very useful when there are few parameters and the yield is very high or very low; however, a good practical technique for its implementation, in general, has not been found.
Phase Noise and Noise Induced Frequency Shift in Stochastic Nonlinear Oscillators Phase noise plays an important role in the performances of electronic oscillators. Traditional approaches describe the phase noise problem as a purely diffusive process. In this paper we develop a novel phase model reduction technique for phase noise analysis of nonlinear oscillators subject to stochastic inputs. We obtain analytical equations for both the phase deviation and the probability density function of the phase deviation. We show that, in general, the phase reduced models include non Markovian terms. Under the Markovian assumption, we demonstrate that the effect of white noise is to generate both phase diffusion and a frequency shift, i.e. phase noise is best described as a convection-diffusion process. The analysis of a solvable model shows the accuracy of our theory, and that it gives better predictions than traditional phase models.
Polynomial chaos for simulating random volatilities In financial mathematics, the fair price of options can be achieved by solutions of parabolic differential equations. The volatility usually enters the model as a constant parameter. However, since this constant has to be estimated with respect to the underlying market, it makes sense to replace the volatility by an according random variable. Consequently, a differential equation with stochastic input occurs, whose solution determines the fair price in the refined model. Corresponding expected values and variances can be computed approximately via a Monte Carlo method. Alternatively, the generalised polynomial chaos yields an efficient approach for calculating the required data. Based on a parabolic equation modelling the fair price of Asian options, the technique is developed and corresponding numerical simulations are presented.
Fast and Accurate DPPM Computation Using Model Based Filtering Defective Parts Per Million (DPPM) is an important quality metric that indicates the ratio of defective devices shipped to the customers. It is necessary to estimate and minimize DPPM in order to meet the desired level of quality. However, DPPM estimation requires statistical simulations, which are computationally costly if traditional methods are used. In this work, we propose an efficient DPPM estimation method for analog circuits that greatly reduces the computational burden. We employ a model based approach to selectively simulate only consequential samples in DPPM estimation. We include methods to mitigate the effect of model imperfection and robust model fitting to guarantee a consistent and efficient estimation. Experimental results show that the proposed method achieves 10x to 25x reduction in the number of simulations for an RF receiver front-end circuit.
Subspace Pursuit for Compressive Sensing: Closing the Gap Between Performance and Complexity We propose a new method for reconstruction of sparse signals with and without noisy perturbations, termed the subspace pursuit algorithm. The algorithm has two important characteristics: low computational complexity, comparable to that of orthogonal matching pursuit techniques, and reconstruction accuracy of the same order as that of LP optimization methods. The presented analysis shows that in the noiseless setting, the proposed algorithm can exactly reconstruct arbitrary sparse signals provided that the sensing matrix satisfies the restricted isometry property with a constant parameter. In the noisy setting and in the case that the signal is not exactly sparse, it can be shown that the mean squared error of the reconstruction is upper bounded by constant multiples of the measurement and signal perturbation energies.
Block compressed sensing of images using directional transforms Block-based random image sampling is coupled with a projection-driven compressed-sensing recovery that encourages sparsity in the domain of directional transforms simultaneously with a smooth reconstructed image. Both contourlets as well as complex-valued dual-tree wavelets are considered for their highly directional representation, while bivariate shrinkage is adapted to their multiscale decomposition structure to provide the requisite sparsity constraint. Smoothing is achieved via a Wiener filter incorporated into iterative projected Landweber compressed-sensing recovery, yielding fast reconstruction. The proposed approach yields images with quality that matches or exceeds that produced by a popular, yet computationally expensive, technique which minimizes total variation. Additionally, reconstruction quality is substantially superior to that from several prominent pursuits-based algorithms that do not include any smoothing.
Parallel Opportunistic Routing in Wireless Networks We study the benefits of opportunistic routing in a large wireless ad hoc network by examining how the power, delay, and total throughput scale as the number of source–destination pairs increases up to the operating maximum. Our opportunistic routing is novel in the sense that it is massively parallel, i.e., it is performed by many nodes simultaneously to maximize the opportunistic gain while controlling the interuser interference. The scaling behavior of conventional multihop transmission that does not employ opportunistic routing is also examined for comparison. Our main results indicate that our opportunistic routing can exhibit a net improvement in overall power–delay tradeoff over the conventional routing by providing up to a logarithmic boost in the scaling law. Such a gain is possible since the receivers can tolerate more interference due to the increased received signal power provided by the multiuser diversity gain, which means that having more simultaneous transmissions is possible.
A game-theoretic multipath routing for video-streaming services over Mobile Ad Hoc Networks The number of portable devices capable of maintaining wireless communications has increased considerably in the last decade. Such mobile nodes may form a spontaneous self-configured network connected by wireless links to constitute a Mobile Ad Hoc Network (MANET). As the number of mobile end users grows the demand of multimedia services, such as video-streaming, in such networks is envisioned to increase as well. One of the most appropriate video coding technique for MANETs is layered MPEG-2 VBR, which used with a proper multipath routing scheme improves the distribution of video streams. In this article we introduce a proposal called g-MMDSR (game theoretic-Multipath Multimedia Dynamic Source Routing), a cross-layer multipath routing protocol which includes a game theoretic approach to achieve a dynamic selection of the forwarding paths. The proposal seeks to improve the own benefits of the users whilst using the common scarce resources efficiently. It takes into account the importance of the video frames in the decoding process, which outperforms the quality of the received video. Our scheme has proved to enhance the performance of the framework and the experience of the end users. Simulations have been carried out to show the benefits of our proposal under different situations where high interfering traffic and mobility of the nodes are present.
1.027706
0.030661
0.030661
0.022547
0.011283
0.007685
0.002817
0.000277
0.000117
0.00002
0
0
0
0
Noise Reduction Through Compressed Sensing We present an exemplar-based method for noise reduction using missing data imputation: A noise-corrupted word is sparsely represented in an over-complete basis of exemplar (clean) speech signals using only the uncorrupted time-frequency elements of the word. Prior to recognition the parts of the spectrogram dominated by noise are replaced by clean speech estimates obtained by projecting the sparse representation in the basis. Since at low SNRs individual frames may contain few, if any, uncorrupted coefficients, the method tries to exploit all reliable information that is available in a word-length time window. We study the effectiveness of this approach on the Interspeech 2008 Consonant Challenge (VCV) data as well as on AURORA-2 data. Using oracle masks, we obtain accuracies of 36-44% on the VCV data. On AURORA-2 we obtain an accuracy of 91% at SNR -5 dB, compared to 61% using a conventional frame-based approach, clearly illustrating the great potential of the method.
TR01: Time-continuous Sparse Imputation An effective way to increase the noise robustness of automatic speech recognition is to label noisy speech features as either reliable or unreliable (missing) prior to decoding, and to replace the missing ones by clean speech estimates. We present a novel method to obtain such clean speech estimates. Unlike previous imputation frameworks which work on a frame-by-frame basis, our method focuses on exploiting information from a large time-context. Using a sliding window approach, denoised speech representations are constructed using a sparse representation of the reliable features in an overcomplete basis of fixed-length exemplar fragments. We demonstrate the potential of our approach with experiments on the AURORA-2 connected digit database. In (6), we showed that this data scarcity problem at very low SNRs can be solved by a missing data imputation method that uses a time window which is (much) wider than a single frame. This allows a better exploitation of the redundancy of the speech signal. The technique, sparse imputation, works by finding a sparse representation of the reliable features of an unknown word in an overcomplete basis of noise-free example words. The projection of these sparse representations in the basis is then used to provide clean speech estimates to replace the unreliable features. Since the imputation framework introduced in (6) represents each word by a fixed-length vector, its applicability is limited to situations where the word boundaries are known beforehand, such as in isolated word recognition. In the current paper we extend sparse imputation for use in continuous speech recognition. Rather than imputing whole words using a basis of exemplar words, we impute fixed-length sliding time
Sparse imputation for noise robust speech recognition using soft masks In previous work we introduced a new missing data imputation method for ASR, dubbed sparse imputation. We showed that the method is capable of maintaining good recognition accuracies even at very low SNRs provided the number of mask estimation errors is sufficiently low. Especially at low SNRs, however, mask estimation is difficult and errors are unavoidable. In this paper, we try to reduce the impact of mask estimation errors by making soft decisions, i.e., estimating the probability that a feature is reliable. Using an isolated digit recognition task (using the AURORA-2 database), we demonstrate that using soft masks in our sparse imputation approach yields a substantial increase in recognition accuracy, most notably at low SNRs.
Reconstruction of missing features for robust speech recognition Speech recognition systems perform poorly in the presence of corrupting noise. Missing feature methods attempt to compensate for the noise by removing noise corrupted components of spectrographic representations of noisy speech and performing recognition with the remaining reliable components. Conventional classifier-compensation methods modify the recognition system to work with the incomplete representations so obtained. This constrains them to perform recognition using spectrographic features which are known to be less optimal than cepstra. In this paper we present two missing-feature algorithms that reconstruct complete spectrograms from incomplete noisy ones. Cepstral vectors can now be derived from the reconstructed spectrograms for recognition. The first algorithm uses MAP procedures to estimate corrupt components from their correlations with reliable components. The second algorithm clusters spectral vectors of clean speech. Corrupt components of noisy speech are estimated from the distribution of the cluster that the analysis frame is identified with. Experiments show that, although conventional classifier-compensation methods are superior when recognition is performed with spectrographic features, cepstra derived from the reconstructed spectrograms result in better recognition performance overall. The proposed methods are also less expensive computationally and do not require modification of the recognizer.
Compressed Sensing and Redundant Dictionaries This paper extends the concept of compressed sensing to signals that are not sparse in an orthonormal basis but rather in a redundant dictionary. It is shown that a matrix, which is a composition of a random matrix of certain type and a deterministic dictionary, has small restricted isometry constants. Thus, signals that are sparse with respect to the dictionary can be recovered via basis pursuit (BP) from a small number of random measurements. Further, thresholding is investigated as recovery algorithm for compressed sensing, and conditions are provided that guarantee reconstruction with high probability. The different schemes are compared by numerical experiments.
Stable recovery of sparse overcomplete representations in the presence of noise Overcomplete representations are attracting interest in signal processing theory, particularly due to their potential to generate sparse representations of signals. However, in general, the problem of finding sparse representations must be unstable in the presence of noise. This paper establishes the possibility of stable recovery under a combination of sufficient sparsity and favorable structure of the overcomplete system. Considering an ideal underlying signal that has a sufficiently sparse representation, it is assumed that only a noisy version of it can be observed. Assuming further that the overcomplete system is incoherent, it is shown that the optimally sparse approximation to the noisy data differs from the optimally sparse decomposition of the ideal noiseless signal by at most a constant multiple of the noise level. As this optimal-sparsity method requires heavy (combinatorial) computational effort, approximation algorithms are considered. It is shown that similar stability is also available using the basis and the matching pursuit algorithms. Furthermore, it is shown that these methods result in sparse approximation of the noisy data that contains only terms also appearing in the unique sparsest representation of the ideal noiseless sparse signal.
First-order incremental block-based statistical timing analysis Variability in digital integrated circuits makes timing verification an extremely challenging task. In this paper, a canonical first order delay model is proposed that takes into account both correlated and independent randomness. A novel linear-time block-based statistical timing algorithm is employed to propagate timing quantities like arrival times and required arrival times through the timing graph in this canonical form. At the end of the statistical timing, the sensitivities of all timing quantities to each of the sources of variation are available. Excessive sensitivities can then be targeted by manual or automatic optimization methods to improve the robustness of the design. This paper also reports the first incremental statistical timer in the literature which is suitable for use in the inner loop of physical synthesis or other optimization programs. The third novel contribution of this paper is the computation of local and global criticality probabilities. For a very small cost in CPU time, the probability of each edge or node of the timing graph being critical is computed. Numerical results are presented on industrial ASIC chips with over two million logic gates.
Tensor Decompositions and Applications This survey provides an overview of higher-order tensor decompositions, their applications, and available software. A tensor is a multidimensional or $N$-way array. Decompositions of higher-order tensors (i.e., $N$-way arrays with $N \geq 3$) have applications in psychometrics, chemometrics, signal processing, numerical linear algebra, computer vision, numerical analysis, data mining, neuroscience, graph analysis, and elsewhere. Two particular tensor decompositions can be considered to be higher-order extensions of the matrix singular value decomposition: CANDECOMP/PARAFAC (CP) decomposes a tensor as a sum of rank-one tensors, and the Tucker decomposition is a higher-order form of principal component analysis. There are many other tensor decompositions, including INDSCAL, PARAFAC2, CANDELINC, DEDICOM, and PARATUCK2 as well as nonnegative variants of all of the above. The N-way Toolbox, Tensor Toolbox, and Multilinear Engine are examples of software packages for working with tensors.
A model based on linguistic 2-tuples for dealing with multigranular hierarchical linguistic contexts in multi-expert decision-making In those problems that deal with multiple sources of linguistic information we can find problems defined in contexts where the linguistic assessments are assessed in linguistic term sets with different granularity of uncertainty and/or semantics (multigranular linguistic contexts). Different approaches have been developed to manage this type of contexts, that unify the multigranular linguistic information in an unique linguistic term set for an easy management of the information. This normalization process can produce a loss of information and hence a lack of precision in the final results. In this paper, we shall present a type of multigranular linguistic contexts we shall call linguistic hierarchies term sets, such that, when we deal with multigranular linguistic information assessed in these structures we can unify the information assessed in them without loss of information. To do so, we shall use the 2-tuple linguistic representation model. Afterwards we shall develop a linguistic decision model dealing with multigranular linguistic contexts and apply it to a multi-expert decision-making problem
Proceedings of the 41st Design Automation Conference, DAC 2004, San Diego, CA, USA, June 7-11, 2004
Robust Regression and Lasso Lasso, or l1 regularized least squares, has been explored extensively for its remarkable sparsity properties. In this paper it is shown that the solution to Lasso, in addition to its sparsity, has robustness properties: it is the solution to a robust optimization problem. This has two important consequences. First, robustness provides a connection of the regularizer to a physical property, namely, protection from noise. This allows a principled selection of the regularizer, and in particular, generalizations of Lasso that also yield convex optimization problems are obtained by considering different uncertainty sets. Second, robustness can itself be used as an avenue for exploring different properties of the solution. In particular, it is shown that robustness of the solution explains why the solution is sparse. The analysis as well as the specific results obtained differ from standard sparsity results, providing different geometric intuition. Furthermore, it is shown that the robust optimization formulation is related to kernel density estimation, and based on this approach, a proof that Lasso is consistent is given, using robustness directly. Finally, a theorem is proved which states that sparsity and algorithmic stability contradict each other, and hence Lasso is not stable.
Practical RDF schema reasoning with annotated semantic web data Semantic Web data with annotations is becoming available, with the YAGO knowledge base being a prominent example. In this paper we present an approach to perform the closure of large RDF Schema annotated semantic web data using standard database technology. In particular, we exploit several alternatives to address the problem of computing transitive closure with real fuzzy semantic data extracted from YAGO in the PostgreSQL database management system. We benchmark these alternatives and compare them to classical RDF Schema reasoning, providing the first implementation of annotated RDF Schema in persistent storage.
Fuzzy concepts and formal methods: some illustrative examples It has been recognised that formal methods are useful as a modelling tool in requirements engineering. Specification languages such as Z permit the precise and unambiguous modelling of system properties and behaviour. However, some system problems, particularly those drawn from the information systems (IS) problem domain, may be difficult to model in crisp or precise terms. It may also be desirable that formal modelling should commence as early as possible, even when our understanding of parts of the problem domain is only approximate. This paper identifies the problem types of interest and argues that they are characterised by uncertainty and imprecision. It suggests fuzzy set theory as a useful formalism for modelling aspects of this imprecision. The paper illustrates how a fuzzy logic toolkit for Z can be applied to such problem domains. Several examples are presented illustrating the representation of imprecise concepts as fuzzy sets and relations, and soft pre-conditions and system requirements as a series of linguistically quantified propositions.
Subjective Quality Metric For 3d Video Services Three-dimensional (3D) video service is expected to be introduced as a next-generation television service. Stereoscopic video is composed of two 2D video signals for the left and right views, and these 2D video signals are encoded. Video quality between the left and right views is not always consistent because, for example, each view is encoded at a different bit rate. As a result, the video quality difference between the left and right views degrades the quality of stereoscopic video. However, these characteristics have not been thoroughly studied or modeled. Therefore, it is necessary to better understand how the video quality difference affects stereoscopic video quality and to model the video quality characteristics. To do that, we conducted subjective quality assessments to derive subjective video quality characteristics. The characteristics showed that 3D video quality was affected by the difference in video quality between the left and right views, and that when the difference was small, 3D video quality correlated with the highest 2D video quality of the two views. We modeled these characteristics as a subjective quality metric using a training data set. Finally, we verified the performance of our proposed model by applying it to unknown data sets.
1.105
0.11
0.05
0.02
0.000061
0.000003
0
0
0
0
0
0
0
0
Application of level soft sets in decision making based on interval-valued fuzzy soft sets Molodtsov's soft set theory was originally proposed as a general mathematical tool for dealing with uncertainty. Research on (fuzzy) soft set based decision making has received much attention in recent years. This paper aims to give deeper insights into decision making involving interval-valued fuzzy soft sets, a hybrid model combining soft sets with interval-valued fuzzy sets. The concept called reduct fuzzy soft sets of interval-valued fuzzy soft sets is introduced. Using reduct fuzzy soft sets and level soft sets, flexible schemes for decision making based on (weighted) interval-valued fuzzy soft sets are proposed, and some illustrative examples are employed to show that the proposals presented here are not only more reasonable but more efficient in practical applications.
Interval type-2 fuzzy neural network control for X-Y-Theta motion control stage using linear ultrasonic motors An interval type-2 fuzzy neural network (IT2FNN) control system is proposed to control the position of an X-Y-Theta (X-Y-θ) motion control stage using linear ultrasonic motors (LUSMs) to track various contours. The IT2FNN, which combines the merits of interval type-2 fuzzy logic system (FLS) and neural network, is developed to simplify the computation and to confront the uncertainties of the X-Y-θ motion control stage. Moreover, the parameter learning of the IT2FNN based on the supervised gradient descent method is performed on line. The experimental results show that the tracking performance of the IT2FNN is significantly improved compared to type-1 FNN.
Xor-Implications and E-Implications: Classes of Fuzzy Implications Based on Fuzzy Xor The main contribution of this paper is to introduce an autonomous definition of the connective ''fuzzy exclusive or'' (fuzzy Xor, for short), which is independent from others connectives. Also, two canonical definitions of the connective Xor are obtained from the composition of fuzzy connectives, and based on the commutative and associative properties related to the notions of triangular norms, triangular conorms and fuzzy negations. We show that the main properties of the classical connective Xor are preserved by the connective fuzzy Xor, and, therefore, this new definition of the connective fuzzy Xor extends the related classical approach. The definitions of fuzzy Xor-implications and fuzzy E-implications, induced by the fuzzy Xor connective, are also studied, and their main properties are analyzed. The relationships between the fuzzy Xor-implications and the fuzzy E-implications with automorphisms are explored.
Systematic image processing for diagnosing brain tumors: A Type-II fuzzy expert system approach This paper presents a systematic Type-II fuzzy expert system for diagnosing the human brain tumors (Astrocytoma tumors) using T1-weighted Magnetic Resonance Images with contrast. The proposed Type-II fuzzy image processing method has four distinct modules: Pre-processing, Segmentation, Feature Extraction, and Approximate Reasoning. We develop a fuzzy rule base by aggregating the existing filtering methods for Pre-processing step. For Segmentation step, we extend the Possibilistic C-Mean (PCM) method by using the Type-II fuzzy concepts, Mahalanobis distance, and Kwon validity index. Feature Extraction is done by Thresholding method. Finally, we develop a Type-II Approximate Reasoning method to recognize the tumor grade in brain MRI. The proposed Type-II expert system has been tested and validated to show its accuracy in the real world. The results show that the proposed system is superior in recognizing the brain tumor and its grade than Type-I fuzzy expert systems.
Interval-valued Fuzzy Sets, Possibility Theory and Imprecise Probability Interval-valued fuzzy sets were proposed thirty years ago as a natural extension of fuzzy sets. Many variants of these mathematical objects exist, under various names. One popular variant proposed by Atanassov starts by the specification of membership and non-membership functions. This paper focuses on interpretations of such extensions of fuzzy sets, whereby the two membership functions that define them can be justified in the scope of some information representation paradigm. It particularly focuses on a recent proposal by Neumaier, who proposes to use interval-valued fuzzy sets under the name "clouds", as an efficient method to represent a family of probabilities. We show the connection between clouds, interval-valued fuzzy sets and possibility theory.
Multiattribute decision making based on interval-valued intuitionistic fuzzy values In this paper, we present a new multiattribute decision making method based on the proposed interval-valued intuitionistic fuzzy weighted average operator and the proposed fuzzy ranking method for intuitionistic fuzzy values. First, we briefly review the concepts of interval-valued intuitionistic fuzzy sets and the Karnik-Mendel algorithms. Then, we propose the intuitionistic fuzzy weighted average operator and interval-valued intuitionistic fuzzy weighted average operator, based on the traditional weighted average method and the Karnik-Mendel algorithms. Then, we propose a fuzzy ranking method for intuitionistic fuzzy values based on likelihood-based comparison relations between intervals. Finally, we present a new multiattribute decision making method based on the proposed interval-valued intuitionistic fuzzy weighted average operator and the proposed fuzzy ranking method for intuitionistic fuzzy values. The proposed method provides us with a useful way for multiattribute decision making based on interval-valued intuitionistic fuzzy values.
Extension principles for interval-valued intuitionistic fuzzy sets and algebraic operations The Atanassov's intuitionistic fuzzy (IF) set theory has become a popular topic of investigation in the fuzzy set community. However, there is less investigation on the representation of level sets and extension principles for interval-valued intuitionistic fuzzy (IVIF) sets as well as algebraic operations. In this paper, firstly the representation theorem of IVIF sets is proposed by using the concept of level sets. Then, the extension principles of IVIF sets are developed based on the representation theorem. Finally, the addition, subtraction, multiplication and division operations over IVIF sets are defined based on the extension principle. The representation theorem and extension principles as well as algebraic operations form an important part of Atanassov's IF set theory.
Sensed Signal Strength Forecasting for Wireless Sensors Using Interval Type-2 Fuzzy Logic System. In this paper, we present a new approach for sensed signal strength forecasting in wireless sensors using interval type-2 fuzzy logic system (FLS). We show that a type-2 fuzzy membership function, i.e., a Gaussian MF with uncertain mean is most appropriate to model the sensed signal strength of wireless sensors. We demonstrate that the sensed signals of wireless sensors are self-similar, which means it can be forecasted. An interval type-2 FLS is designed for sensed signal forecasting and is compared against a type-1 FLS. Simulation results show that the interval type-2 FLS performs much better than the type-1 FLS in sensed signal forecasting. This application can be further used for power on/off control in wireless sensors to save battery energy.
Concept Representation and Database Structures in Fuzzy Social Relational Networks We discuss the idea of fuzzy relationships and their role in modeling weighted social relational networks. The paradigm of computing with words is introduced, and the role that fuzzy sets play in representing linguistic concepts is described. We discuss how these technologies can provide a bridge between a network analyst's linguistic description of social network concepts and the formal model of the network. We then turn to some examples of taking an analyst's network concepts and formally representing them in terms of network properties. We first do this for the concept of clique and then for the idea of node importance. Finally, we introduce the idea of vector-valued nodes and begin developing a technology of social network database theory.
Uncertain Linguistic Hybrid Geometric Mean Operator And Its Application To Group Decision Making Under Uncertain Linguistic Environment In this paper, we propose an uncertain linguistic hybrid geometric mean (ULHGM) operator, which is based on the uncertain linguistic weighted geometric mean (ULWGM) operator and the uncertain linguistic ordered weighted geometric (ULOWG) operator proposed by Xu [Z.S. Xu, "An approach based on the uncertain LOWG and induced uncertain LOWG operators to group decision making with uncertain multiplicative linguistic preference relations", Decision Support Systems 41 (2006) 488-499] and study some desirable properties of the ULHGM operator. We have proved both ULWGM and ULOWG operators are the special case of the ULHGM operator. The ULHGM operator generalizes both the ULWGM and ULOWG operators, and reflects the importance degrees of both the given arguments and their ordered positions. Based on the ULWGM and ULHGM operators, we propose a practical method for multiple attribute group decision making with uncertain linguistic preference relations. Finally, an illustrative example demonstrates the practicality and effectiveness of the proposed method.
A new fuzzy connectivity class application to structural recognition in images Fuzzy sets theory constitutes a powerful tool that can lead to more robustness in problems such as image segmentation and recognition. This robustness results to some extent from the partial recovery of the continuity that is lost during digitization. Here we deal with fuzzy connectivity notions. We show that usual fuzzy connectivity definitions have some drawbacks, and we propose a new definition, based on the notion of hyperconnection, that exhibits better properties, in particular in terms of continuity. We illustrate the potential use of this definition in a recognition procedure based on connected filters. A max-tree representation is also used, in order to deal efficiently with the proposed connectivity.
Implications of buyer decision theory for design of e-commerce websites In the rush to open their website, e-commerce sites too often fail to support buyer decision making and search, resulting in a loss of sale and the customer's repeat business. This paper reviews why this occurs and the failure of many B2C and B2B website executives to understand that appropriate decision support and search technology can't be fully bought off-the-shelf. Our contention is that significant investment and effort is required at any given website in order to create the decision support and search agents needed to properly support buyer decision making. We provide a framework to guide such effort (derived from buyer behavior choice theory); review the open problems that e-catalog sites pose to the framework and to existing search engine technology; discuss underlying design principles and guidelines; validate the framework and guidelines with a case study; and discuss lessons learned and steps needed to better support buyer decision behavior in the future. Future needs are also pinpointed.
Highly connected monochromatic subgraphs We conjecture that for n>4(k-1) every 2-coloring of the edges of the complete graph $K_n$ contains a k-connected monochromatic subgraph with at least n-2(k-1) vertices. This conjecture, if true, is best possible. Here we prove it for k=2, and show how to reduce it to the case n<7k-6. We prove the following result as well: for n>16k every 2-colored $K_n$ contains a k-connected monochromatic subgraph with at least n-12k vertices.
3D visual experience oriented cross-layer optimized scalable texture plus depth based 3D video streaming over wireless networks. •A 3D experience oriented 3D video cross-layer optimization method is proposed.•Networking-related 3D visual experience model for 3D video streaming is presented.•3D video characteristics are fully considered in the cross-layer optimization.•MAC layer channel allocation and physical layer MCS are systematically optimized.•Results show that our method obtains superior 3D visual experience to others.
1.103436
0.106872
0.05419
0.035624
0.014485
0.006894
0.001724
0.000068
0.000012
0.000003
0
0
0
0
Stable Reduced Models for Nonlinear Descriptor Systems Through Piecewise-Linear Approximation and Projection This paper presents theoretical and practical results concerning the stability of piecewise-linear (PWL) reduced models for the purposes of analog macromodeling. Results include proofs of input-output (I/O) stability for PWL approximations to certain classes of nonlinear descriptor systems, along with projection techniques that are guaranteed to preserve I/O stability in reduced-order PWL models. We also derive a new PWL formulation and introduce a new nonlinear projection, allowing us to extend our stability results to a broader class of nonlinear systems described by models containing nonlinear descriptor functions. Lastly, we present algorithms to compute efficiently the required stabilizing nonlinear left-projection matrix operators.
Phase Noise and Noise Induced Frequency Shift in Stochastic Nonlinear Oscillators Phase noise plays an important role in the performances of electronic oscillators. Traditional approaches describe the phase noise problem as a purely diffusive process. In this paper we develop a novel phase model reduction technique for phase noise analysis of nonlinear oscillators subject to stochastic inputs. We obtain analytical equations for both the phase deviation and the probability density function of the phase deviation. We show that, in general, the phase reduced models include non Markovian terms. Under the Markovian assumption, we demonstrate that the effect of white noise is to generate both phase diffusion and a frequency shift, i.e. phase noise is best described as a convection-diffusion process. The analysis of a solvable model shows the accuracy of our theory, and that it gives better predictions than traditional phase models.
Macromodel Generation for BioMEMS Components Using a Stabilized Balanced Truncation Plus Trajectory Piecewise-Linear Approach In this paper, we present a technique for automatically extracting nonlinear macromodels of biomedical microelectromechanical systems devices from physical simulation. The technique is a modification of the recently developed trajectory piecewise-linear approach, but uses ideas from balanced truncation to produce much lower order and more accurate models. The key result is a perturbation analysis of an instability problem with the reduction algorithm, and a simple modification that makes the algorithm more robust. Results are presented from examples to demonstrate dramatic improvements in reduced model accuracy and show the limitations of the method.
Identification of PARAFAC-Volterra cubic models using an Alternating Recursive Least Squares algorithm A broad class of nonlinear systems can be modelled by the Volterra series representation. However, its practical use in nonlinear system identification is sometimes limited due to the large number of parameters associated with the Volterra filters structure. This paper is concerned with the problem of identification of third-order Volterra kernels. A tensorial decomposition called PARAFAC is used to represent such a kernel. A new algorithm called the Alternating Recursive Least Squares (ARLS) algorithm is applied to identify this decomposition for estimating the Volterra kernels of cubic systems. This method significantly reduces the computational complexity of Volterra kernel estimation. Simulation results show the ability of the proposed method to achieve a good identification and an important complexity reduction, i.e. representation of Volterra cubic kernels with few parameters.
A tensor-based volterra series black-box nonlinear system identification and simulation framework. Tensors are a multi-linear generalization of matrices to their d-way counterparts, and are receiving intense interest recently due to their natural representation of high-dimensional data and the availability of fast tensor decomposition algorithms. Given the input-output data of a nonlinear system/circuit, this paper presents a nonlinear model identification and simulation framework built on top of Volterra series and its seamless integration with tensor arithmetic. By exploiting partially-symmetric polyadic decompositions of sparse Toeplitz tensors, the proposed framework permits a pleasantly scalable way to incorporate high-order Volterra kernels. Such an approach largely eludes the curse of dimensionality and allows computationally fast modeling and simulation beyond weakly nonlinear systems. The black-box nature of the model also hides structural information of the system/circuit and encapsulates it in terms of compact tensors. Numerical examples are given to verify the efficacy, efficiency and generality of this tensor-based modeling and simulation framework.
Model Reduction and Simulation of Nonlinear Circuits via Tensor Decomposition Model order reduction of nonlinear circuits (especially highly nonlinear circuits) has always been a theoretically and numerically challenging task. In this paper we utilize tensors (namely, a higher order generalization of matrices) to develop a tensor-based nonlinear model order reduction (TNMOR) algorithm for the efficient simulation of nonlinear circuits. Unlike existing nonlinear model order reduction methods, in TNMOR high-order nonlinearities are captured using tensors, followed by decomposition and reduction to a compact tensor-based reduced-order model. Therefore, TNMOR completely avoids the dense reduced-order system matrices, which in turn allows faster simulation and a smaller memory requirement if relatively low-rank approximations of these tensors exist. Numerical experiments on transient and periodic steady-state analyses confirm the superior accuracy and efficiency of TNMOR, particularly in highly nonlinear scenarios.
Modelling and simulation of autonomous oscillators with random parameters Abstract: We consider periodic problems of autonomous systems of ordinary differential equations or differential algebraic equations. To quantify uncertainties of physical parameters, we introduce random variables in the systems. Phase conditions are required to compute the resulting periodic random process. It follows that the variance of the process depends on the choice of the phase condition. We derive a necessary condition for a random process with a minimal total variance by the calculus of variations. A corresponding numerical method is constructed based on the generalised polynomial chaos. We present numerical simulations of two test examples.
Reversible statistical max/min operation: concept and applications to timing The increasing significance of variability in modern sub-micron manufacturing process has led to the development and use of statistical techniques for chip timing analysis and optimization. Statistical timing involves fundamental operations like statistical-add, sub, max and min to propagate timing information (modeled as random variables with known probability distributions) through a timing graph model of a chip design. Although incremental timing during optimization updates timing information of only certain parts of the timing-graph, lack of established reversible statistical max or min techniques forces more-than-required computations. This paper describes the concept of reversible statistical max and min for correlated Gaussian random variables, and suggests potential applications to statistical timing. A formal proof is presented to establish the uniqueness of reversible statistical max. Experimental results show run-time savings when using the presented technique in the context of chip slack computation during incremental timing optimization.
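The reversal is the paper's contribution; the forward operation it inverts, the moment-matched statistical max of two correlated Gaussian arrival times (Clark's approximation, standard in block-based statistical timing), is easy to sketch. The numbers in the example are arbitrary.

```python
from math import sqrt
from scipy.stats import norm

def gaussian_max_moments(mu1, var1, mu2, var2, rho):
    """Clark's moment matching: mean and variance of max(X, Y) for jointly
    Gaussian X ~ N(mu1, var1), Y ~ N(mu2, var2) with correlation rho."""
    theta = sqrt(max(var1 + var2 - 2.0 * rho * sqrt(var1 * var2), 1e-12))
    alpha = (mu1 - mu2) / theta
    phi, Phi = norm.pdf(alpha), norm.cdf(alpha)
    mean = mu1 * Phi + mu2 * (1.0 - Phi) + theta * phi
    second_moment = ((mu1**2 + var1) * Phi + (mu2**2 + var2) * (1.0 - Phi)
                     + (mu1 + mu2) * theta * phi)
    return mean, second_moment - mean**2

# Two correlated arrival times (units arbitrary).
print(gaussian_max_moments(1.00, 0.010, 1.05, 0.014, 0.3))
```

A reversible max, in the paper's sense, would recover the statistics of one input given the output of this operation and the other input, avoiding recomputation during incremental timing.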
Rapid method to account for process variation in full-chip capacitance extraction Full-chip capacitance extraction programs based on lookup techniques, such as HILEX/CUP , can be enhanced to rigorously account for process variations in the dimensions of very large scale integration interconnect wires with only modest additional computational effort. HILEX/CUP extracts interconnect capacitance from layout using analytical models with reasonable accuracy. These extracted capacitances are strictly valid only for the nominal interconnect dimensions; the networked nature of capacitive relationships in dense, complex interconnect structures precludes simple extrapolations of capacitance with dimensional changes. However, the derivatives, with respect to linewidth variation of the analytical models, can be accumulated along with the capacitance itself for each interacting pair of nodes. A numerically computed derivative with respect to metal and dielectric layer thickness variation can also be accumulated. Each node pair's extracted capacitance and its gradient with respect to linewidth and thickness variation on each metal and dielectric layer can be stored in a file. Thus, instead of storing a scalar value for each extracted capacitance, a vector of 3I+1 values will be stored for capacitance and its gradient, where I is the number of metal layers. Subsequently, this gradient information can be used during circuit simulation in conjunction with any arbitrary vector of interconnect process variations to perform sensitivity analysis of circuit performance.
A Stochastic Collocation Method for Elliptic Partial Differential Equations with Random Input Data In this paper we propose and analyze a Stochastic Collocation method to solve elliptic Partial Differential Equations with random coefficients and forcing terms (input data of the model). The input data are assumed to depend on a finite number of random variables. The method consists in a Galerkin approximation in space and a collocation in the zeros of suitable tensor product orthogonal polynomials (Gauss points) in the probability space and naturally leads to the solution of uncoupled deterministic problems as in the Monte Carlo approach. It can be seen as a generalization of the Stochastic Galerkin method proposed in [Babuška-Tempone-Zouraris, SIAM J. Num. Anal. 42 (2004)] and allows one to treat easily a wider range of situations, such as: input data that depend non-linearly on the random variables, diffusivity coefficients with unbounded second moments, random variables that are correlated or have unbounded support. We provide a rigorous convergence analysis and demonstrate exponential convergence of the "probability error" with respect to the number of Gauss points in each direction in the probability space, under some regularity assumptions on the random input data. Numerical examples show the effectiveness of the method. Key words: collocation method, stochastic PDEs, finite elements, uncertainty quantification, exponential convergence. AMS subject classification: 65N35, 65N15, 65C20
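A minimal non-intrusive collocation sketch, assuming a single uniformly distributed parameter and a scalar ODE with a closed-form solution (much simpler than the elliptic PDE setting above), shows the basic recipe: solve deterministically at Gauss nodes, then combine the samples with quadrature weights.

```python
import numpy as np

# u'(t) = -k*u, u(0) = 1, with k ~ Uniform(a, b); here the per-node "solve"
# is just the exact solution exp(-k*T) evaluated at each collocation point.
a, b, T, n = 0.5, 1.5, 1.0, 8
nodes, weights = np.polynomial.legendre.leggauss(n)   # nodes on [-1, 1], weights sum to 2
k = 0.5 * (b - a) * nodes + 0.5 * (a + b)             # map nodes to the parameter range [a, b]
u = np.exp(-k * T)                                    # uncoupled deterministic solves
w = 0.5 * weights                                     # E[f(k)] = sum_i w_i f(k_i) for uniform k
mean = np.sum(w * u)
var = np.sum(w * u**2) - mean**2
print(mean, var)
```

For several random variables the collocation nodes become a tensor (or sparse) grid, which is where the paper's tensor-product analysis applies.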
Efficient approximation of random fields for numerical applications This article is dedicated to the rapid computation of separable expansions for the approximation of random fields. We consider approaches based on techniques from the approximation of non-local operators on the one hand and based on the pivoted Cholesky decomposition on the other hand. Especially, we provide an a posteriori error estimate for the pivoted Cholesky decomposition in terms of the trace. Numerical examples are provided to validate and quantify the presented methods.
A model of fuzzy linguistic IRS based on multi-granular linguistic information An important question in IRSs is how to facilitate the IRS-user interaction, even more so when the complexity of the fuzzy query language makes it difficult to formulate user queries. The use of linguistic variables to represent the input and output information in the retrieval process of IRSs significantly improves the IRS-user interaction. In the activity of an IRS, there are aspects of different nature to be assessed, e.g., the relevance of documents, the importance of query terms, etc. Therefore, these aspects should be assessed with different uncertainty degrees, i.e., using several label sets with different granularity of uncertainty.
Virus propagation with randomness. Viruses are organisms that need to infect a host cell in order to reproduce. The new viruses leave the infected cell and look for other susceptible cells to infect. The mathematical models for virus propagation are very similar to population and epidemic models, and involve a relatively large number of parameters. These parameters are very difficult to establish with accuracy, while variability in the cell and virus populations and measurement errors are also to be expected. To deal with this issue, we consider the parameters to be random variables with given distributions. We use a non-intrusive variant of the polynomial chaos method to obtain statistics from the differential equations of two different virus models. The equations to be solved remain the same as in the deterministic case; thus no new computer codes need to be developed. Some examples are presented.
Efficient Decision-Making Scheme Based on LIOWAD. A new decision-making method, the linguistic induced ordered weighted averaging distance (LIOWAD) operator, is presented; it combines induced aggregation operators and linguistic information in the Hamming distance. This aggregation operator provides a parameterized family of linguistic aggregation operators that includes the maximum distance, the minimum distance, the linguistic normalized Hamming distance, the linguistic weighted Hamming distance and the linguistic ordered weighted averaging distance, among others. Special attention is given to the analysis of different particular types of LIOWAD operators. The paper ends with an application of the new approach to a decision-making problem about the selection of investments in a linguistic environment.
score_0 to score_13: 1.10204, 0.10408, 0.10408, 0.10408, 0.10408, 0.034733, 0.015324, 0.001333, 0.000142, 0.000008, 0, 0, 0, 0
Overview of SHVC: Scalable Extensions of the High Efficiency Video Coding Standard This paper provides an overview of Scalable High efficiency Video Coding (SHVC), the scalable extensions of the High Efficiency Video Coding (HEVC) standard, published in the second version of HEVC. In addition to the temporal scalability already provided by the first version of HEVC, SHVC further provides spatial, signal-to-noise ratio, bit depth, and color gamut scalability functionalities, as well as combinations of any of these. The SHVC architecture design enables SHVC implementations to be built using multiple repurposed single-layer HEVC codec cores, with the addition of interlayer reference picture processing modules. The general multilayer high-level syntax design common to all multilayer HEVC extensions, including SHVC, MV-HEVC, and 3D HEVC, is described. The interlayer reference picture processing modules, including texture and motion resampling and color mapping, are also described. Performance comparisons are provided for SHVC versus simulcast HEVC and versus the scalable video coding extension to H.264/advanced video coding.
Motion Hooks for the Multiview Extension of HEVC MV-HEVC refers to the multiview extension of High Efficiency Video Coding (HEVC). At the time of writing, MV-HEVC was being developed by the Joint Collaborative Team on 3D Video Coding Extension Development (JCT-3V) of International Organization for Standardization (ISO)/International Electrotechnical Commission (IEC) Moving Picture Experts Group and ITU-T VCEG. Before HEVC itself was technically finalized in January 2013, the development of MV-HEVC had already started and it was decided that MV-HEVC would only contain high-level syntax changes compared with HEVC, i.e., no changes to block-level processes, to enable the reuse of the first-generation HEVC decoder hardware as is for constructing an MV-HEVC decoder with only firmware changes corresponding to the high-level syntax part of the codec. Consequently, any block-level process that is not necessary for HEVC itself but on the other hand is useful for MV-HEVC can only be enabled through so-called hooks. Motion hooks refer to techniques that do not have a significant impact on the HEVC single-view version 1 codec and can mainly improve MV-HEVC. This paper presents techniques for efficient MV-HEVC coding by introducing hooks into the HEVC design to accommodate inter-view prediction in MV-HEVC. These hooks relate to motion prediction, hence named motion hooks. Some of the motion hooks developed by the authors have been adopted into HEVC during its finalization. Simulation results show that the proposed motion hooks provide on average 4% of bitrate reduction for the views coded with inter-view prediction.
Advanced residual prediction enhancement for 3D-HEVC Advanced residual prediction (ARP) is an efficient coding tool in 3D extension of HEVC (3D-HEVC) by exploiting the residual correlation between views. In the early version of ARP, when current prediction unit (PU) has a temporal reference picture, a reference block is firstly identified by a disparity vector and the residual predictor is then produced by aligning the motion information associated with current PU at the current view for motion compensation in the reference view. However, ARP is not allowed when current PU is predicted from an inter-view reference picture. Furthermore, the motion alignment during the ARP is done in a PU level, thus may not be good enough. In this paper, an enhanced ARP scheme is proposed to first extend ARP to the prediction of inter-view residual and then extending the motion alignment to a block level (which can be lower than PU). The proposed method has been partially adopted by 3D-HEVC. Experimental results demonstrate that the proposed scheme achieves 1.3 ~ 4.2% BD rate reduction for non-base views when compared to 3D-HEVC anchor with ARP enabled.
H.265 Video Capacity Over Beyond-4g Networks Long Term Evolution (LTE) has been standardized by the 3GPP consortium since 2008 in 3GPP Release 8, with 3GPP Release 12 being the latest iteration of LTE Advanced (LTE-A), which was finalized in March 2015. High Efficiency Video Coding (H.265) has been standardized by MPEG since 2012 and is the Video Compression technology targeted to deliver High-Definition (HD) and Ultra High-Definition (UHD) Video Content to users. With video traffic projected to represent the lion's share of mobile data traffic, providing users with high Quality of Experience (QoE) is key to designing 4G systems and future 5G systems. In this paper, we present a cross-layer scheduling framework which delivers frames to unicast video users by exploiting the encoding features of H.265. We extract information on frame references within the coded video bitstream to determine which frames have higher utility for the H.265 decoder located at the user's device and evaluate the performances of best-effort and video users in 4G networks using finite buffer traffic models. Our results demonstrate that there is significant potential to improve the QoE of all users compared to the baseline Proportional Fair method by adding media-awareness in the scheduling entity at the Medium Access Control (MAC) layer of a Radio Access Network (RAN).
Model-based intra coding for depth maps in 3D video using a depth lookup table 3D Video is a new technology, which requires the transmission of depth data alongside conventional 2D video. The additional depth information allows arbitrary viewpoints to be synthesized at the receiver and enables adaptation of the perceived depth impression and driving of multi-view auto-stereoscopic displays. In contrast to natural video signals, depth maps are characterized by piecewise smooth regions bounded by sharp edges along depth discontinuities. Conventional video coding methods tend to introduce ringing artifacts along these depth discontinuities, which lead to visually disturbing geometric distortions in the view synthesis process. Preserving the described signal characteristics of depth maps is therefore a crucial requirement for new depth coding algorithms. In this paper a novel model-based intra-coding mode is presented, which works as an addition to conventional transform-based intra coding tools. The proposed intra coding mode yields up to 42% BD-rate savings in terms of depth rate and up to 2.5% in terms of total rate. The average bitrate savings are approximately 24% for depth rate and 1.5% for the total rate including texture and depth.
Adaptive Bitrate Selection: A Survey. HTTP adaptive streaming (HAS) is the most recent attempt regarding video quality adaptation. It enables cheap and easy to implement streaming technology without the need for a dedicated infrastructure. By using a combination of TCP and HTTP it has the advantage of reusing all the existing technologies designed for ordinary web. Equally important is that HAS traffic passes through firewalls and wor...
Panorama view with spatiotemporal occlusion compensation for 3D video coding. The future of novel 3D display technologies largely depends on the design of efficient techniques for 3D video representation and coding. Recently, multiple view plus depth video formats have attracted many research efforts since they enable intermediate view estimation and permit efficient representation and compression of 3D video sequences. In this paper, we present spatiotemporal occlusion compensation with panorama view (STOP), a novel 3D video coding technique based on the creation of a panorama view and occlusion coding in terms of spatiotemporal offsets. The panorama picture represents most of the visual information acquired from multiple views using a single virtual view, characterized by a larger field of view. Encoding the panorama video with state-of-the-art HEVC and representing occlusions with simple spatiotemporal ancillary information, STOP achieves a high compression ratio and good visual quality, with competitive results with respect to competing techniques. Moreover, STOP enables free viewpoint 3D TV applications whilst allowing legacy displays to get a bidimensional service using a standard video codec and simple cropping operations.
Application-level network emulation: the EmuSocket toolkit EmuSocket is a portable and flexible network emulator that can easily be configured to mimic the communication characteristics, in terms of bandwidth and delay, that occur with low-performance networks. The emulator works with Java applications by intercepting and perturbing application traffic at the socket API level. As traffic shaping takes place in the user-space by means of an instrumented socket implementation, using the toolkit does not require a modified interpreter. The way the emulator perturbs the communication preserves the TCP connection-oriented byte stream semantics.
Measuring user experience: complementing qualitative and quantitative assessment The paper describes an investigation into the relationship between in development expert assessment of user experience quality for mobile phones and subsequent usage figures. It gives an account of initial attempts to understand the correlation of a measure of quality across a range of mobile devices with usage data obtained from 1 million users. It outlines the initial indicative results obtained and how the approach was modified to be used to contribute to business strategy. The study shows that a lack of a good level of user experience quality is a barrier to adoption and use of mobile voice and infotainment services and outlines the learning that allowed the user experience team to build consensus within the team and with senior management stakeholders.
Data compression and harmonic analysis In this paper we review some recent interactions between harmonic analysis and data compression. The story goes back of course to Shannon's R(D) theory in the case of Gaussian stationary processes, which says that transforming into a Fourier basis followed by block coding gives an optimal lossy compression technique; practical developments like transform-based image compression have been inspired by this result. In this paper we also discuss connections perhaps less familiar to the information theory community, growing out of the field of harmonic analysis. Recent harmonic analysis constructions, such as wavelet transforms and Gabor transforms, are essentially optimal transforms for transform coding in certain settings. Some of these transforms are under consideration for future compression standards. We discuss some of the lessons of harmonic analysis in this century. Typically, the problems and achievements of this field have involved goals that were not obviously related to practical data compression, and have used a language not immediately accessible to outsiders. Nevertheless, through an extensive generalization of what Shannon called the “sampling theorem”, harmonic analysis has succeeded in developing new forms of functional representation which turn out to have significant data compression interpretations. We explain why harmonic analysis has interacted with data compression, and we describe some interesting recent ideas in the field that may affect data compression in the future
Variation-aware performance verification using at-speed structural test and statistical timing Meeting the tight performance specifications mandated by the customer is critical for contract manufactured ASICs. To address this, at speed test has been employed to detect subtle delay failures in manufacturing. However, the increasing process spread in advanced nanometer ASICs poses considerable challenges to predicting hardware performance from timing models. Performance verification in the presence of process variation is difficult because the critical path is no longer unique. Different paths become frequency limiting in different process corners. In this paper, we present a novel variation-aware method based on statistical timing to select critical paths for structural test. Node criticalities are computed to determine the probabilities of different circuit nodes being on the critical path across process variation. Moreover, path delays are projected into different process corners using their linear delay function forms. Experimental results for three multimillion gate ASICs demonstrate the effectiveness of our methods.
The Wiener--Askey Polynomial Chaos for Stochastic Differential Equations We present a new method for solving stochastic differential equations based on Galerkin projections and extensions of Wiener's polynomial chaos. Specifically, we represent the stochastic processes with an optimum trial basis from the Askey family of orthogonal polynomials that reduces the dimensionality of the system and leads to exponential convergence of the error. Several continuous and discrete processes are treated, and numerical examples show substantial speed-up compared to Monte Carlo simulations for low dimensional stochastic inputs.
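For the simplest member of the Askey family, the Hermite (Gaussian) chaos of a scalar response, the projection onto the trial basis can be sketched with Gauss-Hermite quadrature; the response function below is an arbitrary example, not one from the paper.

```python
import math
import numpy as np

f = lambda x: np.exp(0.3 * x)            # nonlinear response of one standard Gaussian input
order, nquad = 6, 40
x, w = np.polynomial.hermite_e.hermegauss(nquad)   # weight exp(-x^2/2); weights sum to sqrt(2*pi)
w = w / np.sqrt(2.0 * np.pi)                       # normalize to the standard normal density

# Chaos coefficients c_n = E[f(xi) He_n(xi)] / n! for probabilists' Hermite polynomials He_n.
coeffs = [np.sum(w * f(x) * np.polynomial.hermite_e.hermeval(x, np.eye(order + 1)[n]))
          / math.factorial(n) for n in range(order + 1)]

mean = coeffs[0]                                                           # E[f] = c_0
var = sum(math.factorial(n) * coeffs[n]**2 for n in range(1, order + 1))   # Var[f] = sum n! c_n^2
print(mean, var)   # compare with exp(0.045) and (exp(0.09) - 1) * exp(0.09) for this example
```

The point of the Wiener-Askey generalization is that for non-Gaussian inputs one swaps the Hermite polynomials and quadrature rule for the Askey-family polynomials orthogonal under the input's density (e.g., Legendre for uniform, Laguerre for gamma).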
A method for multiple attribute decision making with incomplete weight information under uncertain linguistic environment Multi-attribute decision-making problems are studied in which the information about the attribute values takes the form of uncertain linguistic variables. The concept of deviation degree between uncertain linguistic variables is defined, and the ideal point of an uncertain linguistic decision-making matrix is also defined. A formula of possibility degree for the comparison between uncertain linguistic variables is proposed. Based on the deviation degree and ideal point of uncertain linguistic variables, an optimization model is established; by solving the model, a simple and exact formula is derived to determine the attribute weights when the information about them is completely unknown. When the information about the attribute weights is only partly known, another optimization model is established to determine the weights, which are then used to aggregate the given uncertain linguistic decision information. A method based on possibility degree is given to rank the alternatives. Finally, an illustrative example is given.
SPECO: Stochastic Perturbation based Clock tree Optimization considering temperature uncertainty Modern computing system applications or workloads can bring significant non-uniform temperature gradient on-chip, and hence can cause significant temperature uncertainty during clock-tree synthesis. Existing designs of clock-trees have to assume a given time-invariant worst-case temperature map but cannot deal with a set of temperature maps under a set of workloads. For robust clock-tree synthesis considering temperature uncertainty, this paper presents a new problem formulation: Stochastic PErturbation based Clock Optimization (SPECO). In SPECO algorithm, one nominal clock-tree is pre-synthesized with determined merging points. The impact from the stochastic temperature variation is modeled by perturbation (or small physical displacement) of merging points to offset the induced skews. Because the implementation cost is reduced but the design complexity is increased, the determination of optimal positions of perturbed merging points requires a computationally efficient algorithm. In this paper, one Non-Monte-Carlo (NMC) method is deployed to generate skew and skew variance by one-time analysis when a set of stochastic temperature maps is already provided. Moreover, one principal temperature-map analysis is developed to reduce the design complexity by clustering correlated merging points based on the subspace of the correlation matrix. As a result, the new merging points can be efficiently determined level by level with both skew and its variance reduced. The experimental results show that our SPECO algorithm can effectively reduce the clock-skew and its variance under a number of workloads with minimized wire-length overhead and computational cost.
score_0 to score_13: 1.020412, 0.020084, 0.020084, 0.02, 0.010042, 0.005, 0.00004, 0.000009, 0.000001, 0, 0, 0, 0, 0
Relaxed maximum a posteriori fault identification We consider the problem of estimating a pattern of faults, represented as a binary vector, from a set of measurements. The measurements can be noise corrupted real values, or quantized versions of noise corrupted signals, including even 1-bit (sign) measurements. Maximum a posteriori probability (MAP) estimation of the fault pattern leads to a difficult combinatorial optimization problem, so we propose a variation in which an approximate maximum a posteriori probability estimate is found instead, by solving a convex relaxation of the original problem, followed by rounding and simple local optimization. Our method is extremely efficient, and scales to very large problems, involving thousands (or more) of possible faults and measurements. Using synthetic examples, we show that the method performs extremely well, both in identifying the true fault pattern, and in identifying an ambiguity group, i.e., a set of alternate fault patterns that explain the observed measurements almost as well as our estimate.
Sensor Selection via Convex Optimization We consider the problem of choosing a set of k sensor measurements, from a set of m possible or potential sensor measurements, that minimizes the error in estimating some parameters. Solving this problem by evaluating the performance for each of the (m k) possible choices of sensor measurements is not practical unless m and k are small. In this paper, we describe a heuristic, based on convex optimization, for approximately solving this problem. Our heuristic gives a subset selection as well as a bound on the best performance that can be achieved by any selection of k sensor measurements. There is no guarantee that the gap between the performance of the chosen subset and the performance bound is always small; but numerical experiments suggest that the gap is small in many cases. Our heuristic method requires on the order of m 3 operations; for m= 1000 possible sensors, we can carry out sensor selection in a few seconds on a 2-GHz personal computer.
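The heart of the heuristic, relaxing the Boolean selection vector to the box [0, 1] and maximizing a log-determinant of the resulting information matrix, can be sketched with cvxpy (assumed installed); the rounding step here simply keeps the k largest entries, a simplification of the procedure in the abstract.

```python
import numpy as np
import cvxpy as cp

m, n, k = 60, 5, 10                     # m candidate sensors, n parameters, choose k
rng = np.random.default_rng(0)
A = rng.standard_normal((m, n))         # row i: measurement vector of sensor i

z = cp.Variable(m)                      # relaxed selection variable, one entry per sensor
info = A.T @ cp.diag(z) @ A             # information matrix sum_i z_i a_i a_i^T
prob = cp.Problem(cp.Maximize(cp.log_det(info)),
                  [cp.sum(z) == k, z >= 0, z <= 1])
prob.solve()

chosen = np.argsort(z.value)[-k:]       # round: keep the k largest relaxed values
print(sorted(chosen.tolist()))
print(prob.value)                       # upper bound on the best achievable log-det over k-subsets
```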
Highly Robust Error Correction by Convex Programming This paper discusses a stylized communications problem where one wishes to transmit a real-valued signal (a block of pieces of information) to a remote receiver. We ask whether it is possible to transmit this information reliably when a fraction of the transmitted codeword is corrupted by arbitrary gross errors, and when in addition, all the entries of the codeword are contaminated by smaller errors (e.g., quantization errors). We show that if one encodes the information x as Ax, where A is a suitable coding matrix, there are two decoding schemes that allow the recovery of the block of pieces of information with nearly the same accuracy as if no gross errors occurred upon transmission (or equivalently as if one had an oracle supplying perfect information about the sites and amplitudes of the gross errors). Moreover, both decoding strategies are very concrete and only involve solving simple convex optimization programs, either a linear program or a second-order cone program. We complement our study with numerical simulations showing that the encoder/decoder pair performs remarkably well.
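The first of the two decoding schemes mentioned, minimizing the ℓ1 norm of the residual y − Ax, is an ordinary linear program; a sketch with SciPy on synthetic data (dimensions, error fraction and noise level chosen arbitrarily):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m = 30, 120                                   # message length n, codeword length m
A = rng.standard_normal((m, n)) / np.sqrt(m)     # coding matrix
x_true = rng.standard_normal(n)
y = A @ x_true
bad = rng.choice(m, size=12, replace=False)
y[bad] += 10.0 * rng.standard_normal(12)         # arbitrary gross errors on a fraction of entries
y += 1e-3 * rng.standard_normal(m)               # small errors everywhere (e.g., quantization)

# minimize sum(t) subject to -t <= y - A x <= t, variables (x, t) in R^n x R^m
c = np.concatenate([np.zeros(n), np.ones(m)])
A_ub = np.block([[ A, -np.eye(m)],
                 [-A, -np.eye(m)]])
b_ub = np.concatenate([y, -y])
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * n + [(0, None)] * m)
x_hat = res.x[:n]
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))   # small relative error
```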
Asymptotic Analysis of MAP Estimation via the Replica Method and Applications to Compressed Sensing The replica method is a nonrigorous but well-known technique from statistical physics used in the asymptotic analysis of large, random, nonlinear problems. This paper applies the replica method, under the assumption of replica symmetry, to study estimators that are maximum a posteriori (MAP) under a postulated prior distribution. It is shown that with random linear measurements and Gaussian noise, the replica-symmetric prediction of the asymptotic behavior of the postulated MAP estimate of an $n$-dimensional vector “decouples” as $n$ scalar postulated MAP estimators. The result is based on applying a hardening argument to the replica analysis of postulated posterior mean estimators of Tanaka and of Guo and Verdú. The replica-symmetric postulated MAP analysis can be readily applied to many estimators used in compressed sensing, including basis pursuit, least absolute shrinkage and selection operator (LASSO), linear estimation with thresholding, and zero norm-regularized estimation. In the case of LASSO estimation, the scalar estimator reduces to a soft-thresholding operator, and for zero norm-regularized estimation, it reduces to a hard threshold. Among other benefits, the replica method provides a computationally tractable method for precisely predicting various performance metrics including mean-squared error and sparsity pattern recovery probability.
Optimal design of a CMOS op-amp via geometric programming We describe a new method for determining component values and transistor dimensions for CMOS operational amplifiers (op-amps). We observe that a wide variety of design objectives and constraints have a special form, i.e., they are posynomial functions of the design variables. As a result, the amplifier design problem can be expressed as a special form of optimization problem called geometric programming, for which very efficient global optimization methods have been developed. As a consequence we can efficiently determine globally optimal amplifier designs or globally optimal tradeoffs among competing performance measures such as power, open-loop gain, and bandwidth. Our method, therefore, yields completely automated sizing of (globally) optimal CMOS amplifiers, directly from specifications. In this paper, we apply this method to a specific widely used operational amplifier architecture, showing in detail how to formulate the design problem as a geometric program. We compute globally optimal tradeoff curves relating performance measures such as power dissipation, unity-gain bandwidth, and open-loop gain. We show how the method can be used to size robust designs, i.e., designs guaranteed to meet the specifications for a variety of process conditions and parameters.
Linear transformations and Restricted Isometry Property The restricted isometry property (RIP) introduced by Candes and Tao is a fundamental property in compressed sensing theory. It says that if a sampling matrix satisfies the RIP of certain order proportional to the sparsity of the signal, then the original signal can be reconstructed even if the sampling matrix provides a sample vector which is much smaller in size than the original signal. This short note addresses the problem of how a linear transformation will affect the RIP. This problem arises from the consideration of extending the sensing matrix and the use of compressed sensing in different bases. As an application, the result is applied to the redundant dictionary setting in compressed sensing.
Construction of a Large Class of Deterministic Sensing Matrices That Satisfy a Statistical Isometry Property Compressed Sensing aims to capture attributes of k-sparse signals using very few measurements. In the standard compressed sensing paradigm, the N × C measurement matrix Φ is required to act as a near isometry on the set of all k-sparse signals (restricted isometry property or RIP). Although it is known that certain probabilistic processes generate N × C matrices that satisfy RIP with high probability, there is no practical algorithm for verifying whether a given sensing matrix Φ has this property, crucial for the feasibility of the standard recovery algorithms. In contrast, this paper provides simple criteria that guarantee that a deterministic sensing matrix satisfying these criteria acts as a near isometry on an overwhelming majority of k-sparse signals; in particular, most such signals have a unique representation in the measurement domain. Probability still plays a critical role, but it enters the signal model rather than the construction of the sensing matrix. An essential element in our construction is that we require the columns of the sensing matrix to form a group under pointwise multiplication. The construction allows recovery methods for which the expected performance is sub-linear in C, and only quadratic in N, as compared to the super-linear complexity in C of the Basis Pursuit or Matching Pursuit algorithms; the focus on expected performance is more typical of mainstream signal processing than the worst case analysis that prevails in standard compressed sensing. Our framework encompasses many families of deterministic sensing matrices, including those formed from discrete chirps, Delsarte-Goethals codes, and extended BCH codes.
Sparse representations in unions of bases The purpose of this correspondence is to generalize a result by Donoho and Huo and Elad and Bruckstein on sparse representations of signals in a union of two orthonormal bases for R^N. We consider general (redundant) dictionaries for R^N, and derive sufficient conditions for having unique sparse representations of signals in such dictionaries. The special case where the dictionary is given by the union of L≥2 orthonormal bases for R^N is studied in more detail. In particular, it is proved that the result of Donoho and Huo, concerning the replacement of the ℓ0 optimization problem with a linear programming problem when searching for sparse representations, has an analog for dictionaries that may be highly redundant.
A generic quantitative relationship between quality of experience and quality of service Quality of experience ties together user perception, experience, and expectations to application and network performance, typically expressed by quality of service parameters. Quantitative relationships between QoE and QoS are required in order to be able to build effective QoE control mechanisms onto measurable QoS parameters. Against this background, this article proposes a generic formula in which QoE and QoS parameters are connected through an exponential relationship, called IQX hypothesis. The formula relates changes of QoE with respect to QoS to the current level of QoE, is simple to match, and its limit behaviors are straightforward to interpret. It validates the IQX hypothesis for streaming services, where QoE in terms of Mean Opinion Scores is expressed as functions of loss and reordering ratio, the latter of which is caused by jitter. For web surfing as the second application area, matchings provided by the IQX hypothesis are shown to outperform previously published logarithmic functions. We conclude that the IQX hypothesis is a strong candidate to be taken into account when deriving relationships between QoE and QoS parameters.
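The exponential IQX form, QoE ≈ α·exp(−β·QoS) + γ, can be matched to measured points with standard nonlinear least squares; the loss-ratio/MOS pairs below are made up purely for illustration, not data from the article.

```python
import numpy as np
from scipy.optimize import curve_fit

def iqx(qos, alpha, beta, gamma):
    """IQX hypothesis: QoE decays exponentially in the QoS disturbance."""
    return alpha * np.exp(-beta * qos) + gamma

loss = np.array([0.00, 0.01, 0.02, 0.05, 0.10, 0.20])   # hypothetical loss ratios
mos  = np.array([4.4, 4.0, 3.6, 2.9, 2.2, 1.6])          # hypothetical mean opinion scores

params, _ = curve_fit(iqx, loss, mos, p0=(3.5, 10.0, 1.0))
print(dict(zip(("alpha", "beta", "gamma"), params)))
```

The fitted form makes the article's point explicit: dQoE/dQoS = −β·(QoE − γ), so the sensitivity to a QoS change is proportional to the current QoE level.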
Image Filtering, Edge Detection, and Edge Tracing Using Fuzzy Reasoning We characterize the problem of detecting edges in images as a fuzzy reasoning problem. The edge detection problem is divided into three stages: filtering, detection, and tracing. Images are filtered by applying fuzzy reasoning based on local pixel characteristics to control the degree of Gaussian smoothing. Filtered images are then subjected to a simple edge detection algorithm which evaluates the edge fuzzy membership value for each pixel, based on local image characteristics. Finally, pixels having high edge membership are traced and assembled into structures, again using fuzzy reasoning to guide the tracing process. The filtering, detection, and tracing algorithms are tested on several test images. Comparison is made with a standard edge detection technique.
A fast approach for overcomplete sparse decomposition based on smoothed l0 norm In this paper, a fast algorithm for overcomplete sparse decomposition, called SL0, is proposed. The algorithm is essentially a method for obtaining sparse solutions of underdetermined systems of linear equations, and its applications include under-determined sparse component analysis (SCA), atomic decomposition on overcomplete dictionaries, compressed sensing, and decoding real field codes. Contrary to previous methods, which usually solve this problem by minimizing the l1 norm using linear programming (LP) techniques, our algorithm tries to directly minimize the l0 norm. It is experimentally shown that the proposed algorithm is about two to three orders of magnitude faster than the state-of-the-art interior-point LP solvers, while providing the same (or better) accuracy.
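The core loop of SL0, gradient ascent on the smoothed ℓ0 surrogate followed by projection back onto {x : Ax = y} while σ decreases geometrically, is short enough to sketch; the step size and schedule below follow common defaults and are not necessarily the paper's exact choices.

```python
import numpy as np

def sl0(A, y, sigma_min=1e-3, sigma_decrease=0.5, mu=2.0, inner_iters=3):
    """Minimal SL0 sketch: approximate the sparsest x satisfying A x = y."""
    A_pinv = np.linalg.pinv(A)
    x = A_pinv @ y                          # minimum-l2-norm starting point
    sigma = 2.0 * np.max(np.abs(x))
    while sigma > sigma_min:
        for _ in range(inner_iters):
            delta = x * np.exp(-x**2 / (2.0 * sigma**2))
            x = x - mu * delta              # ascend the smoothed-l0 objective F_sigma
            x = x - A_pinv @ (A @ x - y)    # project back onto the feasible set {Ax = y}
        sigma *= sigma_decrease
    return x

# Example: recover an 8-sparse vector of length 200 from 60 random measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((60, 200))
x_true = np.zeros(200)
x_true[rng.choice(200, 8, replace=False)] = rng.standard_normal(8)
x_hat = sl0(A, A @ x_true)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```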
Improvement of Auto-Regressive Integrated Moving Average models using Fuzzy logic and Artificial Neural Networks (ANNs) Time series forecasting is an active research area that has drawn considerable attention for applications in a variety of areas. Auto-Regressive Integrated Moving Average (ARIMA) models are one of the most important time series models used in financial market forecasting over the past three decades. Recent research activities in time series forecasting indicate that two basic limitations detract from their popularity for financial time series forecasting: (a) ARIMA models assume that future values of a time series have a linear relationship with current and past values as well as with white noise, so approximations by ARIMA models may not be adequate for complex nonlinear problems; and (b) ARIMA models require a large amount of historical data in order to produce accurate results. Both theoretical and empirical findings have suggested that integration of different models can be an effective method of improving upon their predictive performance, especially when the models in the ensemble are quite different. In this paper, ARIMA models are integrated with Artificial Neural Networks (ANNs) and Fuzzy logic in order to overcome the linear and data limitations of ARIMA models, thus obtaining more accurate results. Empirical results of financial markets forecasting indicate that the hybrid models exhibit effectively improved forecasting accuracy so that the model proposed can be used as an alternative to financial market forecasting tools.
Process variability-aware transient fault modeling and analysis Due to reduction in device feature size and supply voltage, the sensitivity of digital systems to transient faults is increasing dramatically. As technology scales further, the increase in transistor integration capacity also leads to the increase in process and environmental variations. Despite these difficulties, it is expected that systems remain reliable while delivering the required performance. Reliability and variability are emerging as new design challenges, thus pointing to the importance of modeling and analysis of transient faults and variation sources for the purpose of guiding the design process. This work presents a symbolic approach to modeling the effect of transient faults in digital circuits in the presence of variability due to process manufacturing. The results show that using a nominal case and not including variability effects, can underestimate the SER by 5% for the 50% yield point and by 10% for the 90% yield point.
Designing type-2 fuzzy logic system controllers via fuzzy Lyapunov synthesis for the output regulator of a servomechanism with nonlinear backlash Fuzzy Lyapunov Synthesis is extended to the design of Type-2 Fuzzy Logic System Controllers for the output regulation problem for a servomechanism with nonlinear backlash. The problem in question is to design a feedback controller so as to obtain the closed-loop system in which all trajectories are bounded and the load of the driver is regulated to a desired position while also attenuating the influence of external disturbances. The servomotor position is the only measurement available for feedback; the proposed extension is far from trivial because of nonminimum phase properties of the system. Performance issues of the Type-2 Fuzzy Logic Regulator constructed are illustrated in a simulation study.
score_0 to score_13: 1.043875, 0.025125, 0.0125, 0.007143, 0.004167, 0.0004, 0.000042, 0.000001, 0, 0, 0, 0, 0, 0
An expert system prototype for inventory capacity planning: an approximate reasoning approach An approximate reasoning framework is suggested for the development of an expert system prototype to aid management in planning inventory capacities. The development is considered to be a stage that comes after the analysis of a stochastic model. Such a model would provide the requisite insight and knowledge about the inventory system under specific assumptions. As a consequence, the model builder(s) would act as expert(s). The restructuring process from the stochastic model into the approximate reasoning framework is described in a case study analysis for a Markovian production model. The stochastic model considers a relatively simplified production process: one machine, constant production rate, a compound Poisson demand process for the product together with the reliability feature comprising the machine failure process and the ensuing repair action. In this context, the authors propose an approximate reasoning framework and describe (1) the identification of the managerial decision-making rules, which usually contain uncertain (vague, ambiguous, fuzzy) linguistic terms; and (2) the specification of membership functions that represent the meaning of such linguistic terms within context-dependent domains of concern. They then define a new universal logic incorporating these rules and functions and apply it to inventory capacity planning. Two case examples and a simulation experiment consisting of 21 cases are summarized with a discussion of results.
Alternative Logics for Approximate Reasoning in Expert Systems: A Comparative Study In this paper we report the results of an empirical study to compare eleven alternative logics for approximate reasoning in expert systems. The several "compositional inference" axiom systems (described below) were used in an expert knowledge-based system. The quality of the system outputs (fuzzy linguistic phrases) was compared in terms of correctness and precision (non-vagueness).
An Approach to Inference in Approximate Reasoning
Measures of similarity among fuzzy concepts: A comparative analysis Many measures of similarity among fuzzy sets have been proposed in the literature, and some have been incorporated into linguistic approximation procedures. The motivations behind these measures are both geometric and set-theoretic. We briefly review 19 such measures and compare their performance in a behavioral experiment. For crudely categorizing pairs of fuzzy concepts as either “similar” or “dissimilar,” all measures performed well. For distinguishing between degrees of similarity or dissimilarity, certain measures were clearly superior and others were clearly inferior; for a few subjects, however, none of the distance measures adequately modeled their similarity judgments. Measures that account for ordering on the base variable proved to be more highly correlated with subjects' actual similarity judgments. And, surprisingly, the best measures were ones that focus on only one “slice” of the membership function. Such measures are easiest to compute and may provide insight into the way humans judge similarity among fuzzy concepts.
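Two representative set-theoretic measures of the kind reviewed, a min/max intersection-over-union ratio and one based on the largest membership difference, can be illustrated on discretized membership functions; this is an illustrative pair only, not the paper's list of 19 measures.

```python
import numpy as np

def similarity_ratio(mu_a, mu_b):
    """Set-theoretic similarity: |A intersect B| / |A union B| using min/max memberships."""
    return np.sum(np.minimum(mu_a, mu_b)) / np.sum(np.maximum(mu_a, mu_b))

def similarity_maxdiff(mu_a, mu_b):
    """Distance-based similarity: 1 minus the largest pointwise membership difference."""
    return 1.0 - np.max(np.abs(mu_a - mu_b))

# Two triangular fuzzy concepts ("about 4" and "about 5") on a shared base variable.
x = np.linspace(0, 10, 101)
tri = lambda c, w: np.clip(1 - np.abs(x - c) / w, 0, 1)
a, b = tri(4, 2), tri(5, 2)
print(similarity_ratio(a, b), similarity_maxdiff(a, b))
```

Measures that account for ordering on the base variable, which the study found to correlate better with human judgments, would additionally use the positions x and not only the membership values.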
Semantics of Context-Free Languages "Meaning" may be assigned to a string in a context-free language by defining "attributes" of the symbols in a derivation tree for that string. The attributes can be defined by functions associated with each production in the grammar. This paper examines the implications of this process when some of the attributes are "synthesized", i.e., defined solely in terms of attributes of the descendants of the corresponding nonterminal symbol, while other attributes are "inherited", i.e., defined in terms of attributes of the ancestors of the nonterminal symbol. An algorithm is given which detects when such semantic rules could possibly lead to circular definition of some attributes. An example is given of a simple programming language defined with both inherited and synthesized attributes, and the method of definition is compared to other techniques for formal specification of semantics which have appeared in the literature. A simple technique for specifying the "meaning" of languages defined by context-free grammars is introduced in Section 1 of this paper, and its basic mathematical properties are investigated in Sections 2 and 3. An example which indicates how the technique can be applied to the formal definition of programming languages is described in Section 4, and finally, Section 5 contains a somewhat biased comparison of the present method to other known techniques for semantic definition. The discussion in this paper is oriented primarily towards programming languages, but the same methods appear to be relevant also in the study of natural languages. 1. Introduction. Let us suppose that we want to give a precise definition of binary notation for numbers. This can be done in many ways, and in this section we want to consider a manner of definition which can be generalized so that the meaning of other notations can be expressed in the same way. One such way to define binary notation is to base a definition on
Parameterized block-based statistical timing analysis with non-Gaussian parameters, nonlinear delay functions Variability of process parameters makes prediction of digital circuit timing characteristics an important and challenging problem in modern chip design. Recently, statistical static timing analysis (statistical STA) has been proposed as a solution. Unfortunately, the existing approaches either do not consider explicit gate delay dependence on process parameters (Liou, et al., 2001), (Orshansky, et al., 2002), (Devgan, et al., 2003), (Agarwal, et al., 2003) or restrict analysis to linear Gaussian parameters only (Visweswariah, et al., 2004), (Chang, et al., 2003). Here the authors extended the capabilities of parameterized block-based statistical STA (Visweswariah, et al., 2004) to handle nonlinear function of delays and non-Gaussian parameters, while retaining maximum efficiency of processing linear Gaussian parameters. The novel technique improves accuracy in predicting circuit timing characteristics and retains such benefits of parameterized block-based statistical STA as an incremental mode of operation, computation of criticality probabilities and sensitivities to process parameter variations. The authors' technique was implemented in an industrial statistical timing analysis tool. The experiments with large digital blocks showed both efficiency and accuracy of the proposed technique.
Fuzzy logic in control systems: fuzzy logic controller. I.
Stable recovery of sparse overcomplete representations in the presence of noise Overcomplete representations are attracting interest in signal processing theory, particularly due to their potential to generate sparse representations of signals. However, in general, the problem of finding sparse representations must be unstable in the presence of noise. This paper establishes the possibility of stable recovery under a combination of sufficient sparsity and favorable structure of the overcomplete system. Considering an ideal underlying signal that has a sufficiently sparse representation, it is assumed that only a noisy version of it can be observed. Assuming further that the overcomplete system is incoherent, it is shown that the optimally sparse approximation to the noisy data differs from the optimally sparse decomposition of the ideal noiseless signal by at most a constant multiple of the noise level. As this optimal-sparsity method requires heavy (combinatorial) computational effort, approximation algorithms are considered. It is shown that similar stability is also available using the basis and the matching pursuit algorithms. Furthermore, it is shown that these methods result in sparse approximation of the noisy data that contains only terms also appearing in the unique sparsest representation of the ideal noiseless sparse signal.
Statistical Timing for Parametric Yield Prediction of Digital Integrated Circuits Uncertainty in circuit performance due to manufacturing and environmental variations is increasing with each new generation of technology. It is therefore important to predict the performance of a chip as a probabilistic quantity. This paper proposes three novel path-based algorithms for statistical timing analysis and parametric yield prediction of digital integrated circuits. The methods have been implemented in the context of the EinsTimer static timing analyzer. The three methods are complementary in that they are designed to target different process variation conditions that occur in practice. Numerical results are presented to study the strengths and weaknesses of these complementary approaches. Timing analysis results in the face of statistical temperature and Vdd variations are presented on an industrial ASIC part on which a bounded timing methodology leads to surprisingly wrong results
Uncertainty measures for interval type-2 fuzzy sets Fuzziness (entropy) is a commonly used measure of uncertainty for type-1 fuzzy sets. For interval type-2 fuzzy sets (IT2 FSs), centroid, cardinality, fuzziness, variance and skewness are all measures of uncertainties. The centroid of an IT2 FS has been defined by Karnik and Mendel. In this paper, the other four concepts are defined. All definitions use a Representation Theorem for IT2 FSs. Formulas for computing the cardinality, fuzziness, variance and skewness of an IT2 FS are derived. These definitions should be useful in IT2 fuzzy logic systems design using the principles of uncertainty, and in measuring the similarity between two IT2 FSs.
Topological approaches to covering rough sets Rough sets, a tool for data mining, deal with the vagueness and granularity in information systems. This paper studies covering-based rough sets from the topological view. We explore the topological properties of this type of rough sets, study the interdependency between the lower and the upper approximation operations, and establish the conditions under which two coverings generate the same lower approximation operation and the same upper approximation operation. Lastly, axiomatic systems for the lower approximation operation and the upper approximation operation are constructed.
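One common pair of covering approximation operators (lower approximation as the union of covering blocks contained in X, upper approximation as the union of blocks meeting X) computes directly on finite sets, as sketched below; several inequivalent definitions exist in the literature, which is precisely what the axiomatic comparison addresses.

```python
def lower_approx(cover, X):
    """Union of covering blocks entirely contained in X."""
    return set().union(*(K for K in cover if K <= X))

def upper_approx(cover, X):
    """Union of covering blocks that intersect X."""
    return set().union(*(K for K in cover if K & X))

# Universe {1,...,6} with an overlapping covering (not a partition).
cover = [frozenset({1, 2}), frozenset({2, 3, 4}), frozenset({4, 5}), frozenset({5, 6})]
X = {2, 3, 4, 5}
print(lower_approx(cover, X))   # {2, 3, 4, 5}
print(upper_approx(cover, X))   # {1, 2, 3, 4, 5, 6}
```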
On Generalized Induced Linguistic Aggregation Operators In this paper, we define various generalized induced linguistic aggregation operators, including generalized induced linguistic ordered weighted averaging (GILOWA) operator, generalized induced linguistic ordered weighted geometric (GILOWG) operator, generalized induced uncertain linguistic ordered weighted averaging (GIULOWA) operator, generalized induced uncertain linguistic ordered weighted geometric (GIULOWG) operator, etc. Each object processed by these operators consists of three components, where the first component represents the importance degree or character of the second component, and the second component is used to induce an ordering, through the first component, over the third components which are linguistic variables (or uncertain linguistic variables) and then aggregated. It is shown that the induced linguistic ordered weighted averaging (ILOWA) operator and linguistic ordered weighted averaging (LOWA) operator are the special cases of the GILOWA operator, induced linguistic ordered weighted geometric (ILOWG) operator and linguistic ordered weighted geometric (LOWG) operator are the special cases of the GILOWG operator, the induced uncertain linguistic ordered weighted averaging (IULOWA) operator and uncertain linguistic ordered weighted averaging (ULOWA) operator are the special cases of the GIULOWA operator, and that the induced uncertain linguistic ordered weighted geometric (IULOWG) operator and uncertain LOWG operator are the special cases of the GILOWG operator.
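The induction mechanism these operators share, reordering the aggregated arguments by a separate order-inducing component and then applying a weighted generalized mean with exponent λ, is easy to show numerically for the plain real-valued (non-linguistic) case; the weights and λ values below are arbitrary.

```python
import numpy as np

def generalized_induced_owa(inducing, values, weights, lam=1.0):
    """Order the values by their order-inducing variables (descending), then apply
    the generalized weighted mean (sum_j w_j * b_j**lam) ** (1/lam)."""
    order = np.argsort(inducing)[::-1]              # induced ordering (largest inducer first)
    b = np.asarray(values, dtype=float)[order]
    w = np.asarray(weights, dtype=float)
    return float(np.sum(w * b**lam)) ** (1.0 / lam)

# First components (importance degrees) induce the ordering; third components are aggregated.
importance = [0.9, 0.2, 0.6, 0.4]
scores     = [3.0, 5.0, 4.0, 2.0]
weights    = [0.4, 0.3, 0.2, 0.1]
print(generalized_induced_owa(importance, scores, weights, lam=1.0))  # induced OWA case
print(generalized_induced_owa(importance, scores, weights, lam=2.0))  # quadratic variant
```

The linguistic versions in the paper replace the real values by labels (or uncertain labels) from a linguistic term set, but the reordering-then-weighting structure is the same.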
Application of FMCDM model to selecting the hub location in the marine transportation: A case study in southeastern Asia Hub location selection problems have become one of the most popular and important issues not only in the truck transportation and the air transportation, but also in the marine transportation. The main focus of this paper is on container transshipment hub locations in southeastern Asia. Transshipment is the fastest growing segment of the containerport market, resulting in significant scope to develop new transshipment terminal capacity to cater for future expected traffic flows. A shipping carrier not only calculates transport distances and operation costs, but also evaluates some qualitative conditions for existing hub locations and then selects an optimal container transshipment hub location in the region. In this paper, a fuzzy multiple criteria decision-making (FMCDM) model is proposed for evaluating and selecting the container transshipment hub port. Finally, the utilization of the proposed FMCDM model is demonstrated with a case study of hub locations in southeastern Asia. The results show that the FMCDM model proposed in this paper can be used to explain the evaluation and decision-making procedures of hub location selection well. In addition, the preferences are calculated for existing hub locations and these are then compared with a new proposed container transshipment hub location in the region, in this instance the Port of Shanghai. Furthermore, a sensitivity analysis is performed.
Designing type-2 fuzzy logic system controllers via fuzzy Lyapunov synthesis for the output regulator of a servomechanism with nonlinear backlash Fuzzy Lyapunov Synthesis is extended to the design of Type-2 Fuzzy Logic System Controllers for the output regulation problem for a servomechanism with nonlinear backlash. The problem in question is to design a feedback controller so as to obtain the closed-loop system in which all trajectories are bounded and the load of the driver is regulated to a desired position while also attenuating the influence of external disturbances. The servomotor position is the only measurement available for feedback; the proposed extension is far from trivial because of nonminimum phase properties of the system. Performance issues of the Type-2 Fuzzy Logic Regulator constructed are illustrated in a simulation study.
score_0 to score_13: 1.2, 0.1, 0.025, 0.006667, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Analysis Of Hierarchical B Pictures And Mctf In this paper, an investigation of H.264/MPEG4-AVC conforming coding with hierarchical B pictures is presented. We analyze the coding delay and memory requirements, describe details of an improved encoder control, and compare the coding efficiency for different coding delays. Additionally, the coding efficiency of hierarchical B picture coding is compared to that of MCTF-based coding by using identical coding structures and a similar degree of encoder optimization. Our simulation results showed that, in comparison to the widely used IBBP... structure, coding gains of more than 1 dB can be achieved at the expense of an increased coding delay. Further experiments have shown that the coding efficiency gains obtained by using the additional update steps in MCTF coding are generally smaller than the losses resulting from the required open-loop encoder control.
Stanford Peer-to-Peer Multicast (SPPM) - Overview and recent extensions We review the Stanford peer-to-peer multicast (SPPM) protocol for live video streaming and report recent extensions. SPPM has been designed for low latency and robust transmission of live media by organizing peers within multiple complementary trees. The recent extensions to live streaming are time-shifted streaming, interactive region-of-interest (IRoI) streaming, and streaming to mobile devices. With time-shifting, users can choose an arbitrary beginning point for watching a stream, whereas IRoI streaming allows users to select an arbitrary region to watch within a high-spatial-resolution scene. We extend the live streaming to mobile devices by addressing challenges due to heterogeneous displays, connection speeds, and decoding capabilities.
BiToS: Enhancing BitTorrent for Supporting Streaming Applications BitTorrent (BT) in the last years has been one of the most effective mechanisms for P2P content distribution. Although BT was created for distribution of time insensitive content, in this work we try to identify what are the minimal changes needed in the BT's mechanisms in order to support streaming. The importance of this capability is that the peer will now have the ability to start enjoying the video before the complete download of the video file. This ability is particularly important in highly polluted environments, since the peer can evaluate the quality of the video content early and thus preserve its valuable resources. In a nutshell, our approach gives higher download priority to pieces that are close to be reproduced by the player. This comes in contrast to the original BT protocol, where pieces are downloaded in an out-of-order manner based solely on their rareness. In particular, our approach tries to strike the balance between downloading pieces in: (a) playing order, enabling smooth playback, and (b) the rarest first order, enabling the use of parallel downloading of pieces. In this work, we introduce three different Piece Selection mechanisms and we evaluate them through simulations based on how well they deliver streaming services to the peers.
Joint Texture And Depth Map Video Coding Based On The Scalable Extension Of H.264/Avc Depth-Image-Based Rendering (DIBR) is widely used for view synthesis in 3D video applications. Compared with traditional 2D video applications, both the texture video and its associated depth map are required for transmission in a communication system that supports DIBR. To efficiently utilize limited bandwidth, coding algorithms, e.g. the Advanced Video Coding (H.264/AVC) standard, can be adopted to compress the depth map using the 4:0:0 chroma sampling format. However, when the correlation between texture video and depth map is exploited, the compression efficiency may be improved compared with encoding them independently using H.264/AVC. A new encoder algorithm which employs Scalable Video Coding (SVC), the scalable extension of H.264/AVC, to compress the texture video and its associated depth map is proposed in this paper. Experimental results show that the proposed algorithm can provide up to 0.97 dB gain for the coded depth maps, compared with the simulcast scheme, wherein texture video and depth map are coded independently by H.264/AVC.
Peer-to-Peer Live Multicast: A Video Perspective Peer-to-peer multicast is promising for large-scale streaming video distribution over the Internet. Viewers contribute their resources to a peer-to-peer overlay network to act as relays for the media streams, and no dedicated infrastructure is required. As packets are transmitted over long, unreliable multipeer transmission paths, it is particularly challenging to achieve consistently high video q...
Effects of MGS Fragmentation, Slice Mode and Extraction Strategies on the Performance of SVC with Medium-Grained Scalability This paper presents a comparison of a wide set of MGS fragmentation configurations of SVC in terms of their PSNR performance, with the slice mode on or off, using multiple extraction methods. We also propose a priority-based hierarchical extraction method which outperforms other extraction schemes for most MGS configurations. Experimental results show that splitting the MGS layer into more than five fragments, when the slice mode is on, may result in a noticeable decrease in the average PSNR. It is also observed that for videos with large key frame enhancement NAL units, MGS fragmentation and/or slice mode have a positive impact on the PSNR of the extracted video at low bitrates. While using slice mode without MGS fragmentation may improve the PSNR performance at low rates, it may result in uneven video quality within frames due to varying quality of slices. Therefore, we recommend combined use of up to five MGS fragments and slice mode, especially for low bitrate video applications.
Priority-based Media Delivery using SVC with RTP and HTTP streaming Media delivery, especially video delivery over mobile channels may be affected by transmission bitrate variations or temporary link interruptions caused by changes in the channel conditions or the wireless interface. In this paper, we present the use of Priority-based Media Delivery (PMD) for Scalable Video Coding (SVC) to overcome link interruptions and channel bitrate reductions in mobile networks by performing a transmission scheduling algorithm that prioritizes media data according to its importance. The proposed approach comprises a priority-based media pre-buffer to overcome periods under reduced connectivity. The PMD algorithm aims to use the same transmission bitrate and overall buffer size as the traditional streaming approach, yet is more likely to overcome interruptions and reduced bitrate periods. PMD achieves longer continuous playback than the traditional approach, avoiding disruptions in the video playout and therefore improving the video playback quality. We analyze the use of SVC with PMD in the traditional RTP streaming and in the adaptive HTTP streaming context. We show benefits of using SVC in terms of received quality during interruption and re-buffering time, i.e. the time required to fill a desired pre-buffer at the receiver. We present a quality optimization approach for PMD and show results for different interruption/bitrate-reduction scenarios.
Overview of the Multiview and 3D Extensions of High Efficiency Video Coding The High Efficiency Video Coding (HEVC) standard has recently been extended to support efficient representation of multiview video and depth-based 3D video formats. The multiview extension, MV-HEVC, allows efficient coding of multiple camera views and associated auxiliary pictures, and can be implemented by reusing single-layer decoders without changing the block-level processing modules since blo...
Improving Fairness, Efficiency, and Stability in HTTP-Based Adaptive Video Streaming With FESTIVE Modern video players today rely on bit-rate adaptation in order to respond to changing network conditions. Past measurement studies have identified issues with today's commercial players when multiple bit-rate-adaptive players share a bottleneck link with respect to three metrics: fairness, efficiency, and stability. Unfortunately, our current understanding of why these effects occur and how they can be mitigated is quite limited. In this paper, we present a principled understanding of bit-rate adaptation and analyze several commercial players through the lens of an abstract player model consisting of three main components: bandwidth estimation, bit-rate selection, and chunk scheduling. Using this framework, we identify the root causes of several undesirable interactions that arise as a consequence of overlaying the video bit-rate adaptation over HTTP. Building on these insights, we develop a suite of techniques that can systematically guide the tradeoffs between stability, fairness, and efficiency and thus lead to a general framework for robust video adaptation. We pick one concrete instance from this design space and show that it significantly outperforms today's commercial players on all three key metrics across a range of experimental scenarios.
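A minimal sketch of two of the three components of the abstract player model named above (bandwidth estimation and bit-rate selection; chunk scheduling is omitted). The harmonic-mean throughput estimator and the one-rung-at-a-time switching rule are illustrative choices in the spirit of the paper, not the FESTIVE algorithm itself.

```python
def harmonic_mean(samples):
    # The harmonic mean damps the effect of occasional throughput spikes.
    return len(samples) / sum(1.0 / s for s in samples)

def select_bitrate(ladder, estimate, current, safety=0.85):
    """Pick the highest rung whose bitrate fits under safety * estimated bandwidth,
    but move up at most one rung at a time to limit oscillation."""
    feasible = [b for b in ladder if b <= safety * estimate]
    target = max(feasible) if feasible else min(ladder)
    if target > current:
        higher = [b for b in ladder if b > current]
        target = min(higher) if higher else current  # step up gradually
    return target

# One adaptation step: throughput samples in kbit/s from the last few chunks.
ladder = [300, 750, 1500, 3000, 6000]
samples = [2800, 3500, 900, 3100, 2900]
estimate = harmonic_mean(samples)
print(select_bitrate(ladder, estimate, current=750))
```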
TAPAS: A Tool for rApid Prototyping of Adaptive Streaming algorithms The central component of any adaptive video streaming system is the stream-switching controller. This paper introduces TAPAS, an open-source Tool for rApid Prototyping of Adaptive Streaming control algorithms. TAPAS is a flexible and extensible video streaming client written in Python that allows users to easily design and carry out experimental performance evaluations of adaptive streaming controllers without needing to write the code to download video segments, parse manifest files, and decode the video. TAPAS currently supports DASH and HLS and has been designed to minimize the CPU and memory footprint so that experiments involving a large number of concurrent video flows can be carried out using a single client machine. An adaptive streaming controller is implemented to illustrate the simplicity of the tool, along with a performance evaluation that validates the tool.
Fuzzy connection admission control for ATM networks based on possibility distribution of cell loss ratio This paper proposes a connection admission control (CAC) method for asynchronous transfer mode (ATM) networks based on the possibility distribution of cell loss ratio (CLR). The possibility distribution is estimated in a fuzzy inference scheme by using observed data of the CLR. This method makes secure CAC possible, thereby guaranteeing the allowed CLR. First, a fuzzy inference method is proposed, based on a weighted average of fuzzy sets, in order to estimate the possibility distribution of the CLR. In contrast to conventional methods, the proposed inference method can avoid estimating excessively large values of the CLR. Second, a learning algorithm is considered for tuning the fuzzy rules used for inference. Here, energy functions are derived so that applying them to CAC efficiently achieves a higher multiplexing gain. Because the upper bound of the CLR can easily be obtained from the possibility distribution by using this algorithm, CAC can be performed while guaranteeing the allowed CLR. The simulation studies show that the proposed method can reliably extract the upper bound of the CLR from the observed data. The proposed method also makes real-time self-compensation possible for the case where the estimated CLR is smaller than the observed CLR. It preserves the guarantee of the CLR as much as possible in the operation of ATM switches. Third, a CAC method which uses the fuzzy inference mentioned above is proposed. In areas with no observed CLR data, fuzzy rules are automatically generated from the fuzzy rules already tuned by the learning algorithm with the existing observed CLR data. Such areas exist because of the lack of experience with the corresponding connections. This method can guarantee the allowed CLR in CAC while attaining as high a multiplexing gain as possible. The simulation studies show its feasibility. Finally, this paper concludes with some brief discussions.
Hesitant fuzzy power aggregation operators and their application to multiple attribute group decision making The hesitant fuzzy set is a useful generalization of the fuzzy set that is designed for situations in which it is difficult to determine the membership of an element to a set owing to ambiguity between a few different values. In this paper, we develop a wide range of hesitant fuzzy power aggregation operators for hesitant fuzzy information. We first introduce several power aggregation operators and then extend these operators to hesitant fuzzy environments, i.e., we introduce operators to aggregate input arguments that take the form of hesitant fuzzy sets. We demonstrate several useful properties of the operators and discuss the relationships between them. The new aggregation operators are utilized to develop techniques for multiple attribute group decision making with hesitant fuzzy information. Finally, some practical examples are provided to illustrate the effectiveness of the proposed techniques.
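As a rough illustration of the power-aggregation idea that the paper extends to hesitant fuzzy environments, here is the underlying power average on crisp arguments in [0, 1]. The support measure Sup(a, b) = 1 − |a − b| is an assumed choice, and the full hesitant fuzzy extension with its set-valued arguments is not reproduced here.

```python
def power_average(values):
    """Power average: arguments that are supported by many similar arguments
    receive larger weights.  Sup(a, b) = 1 - |a - b| is an illustrative support
    measure for arguments in [0, 1]."""
    def sup(a, b):
        return 1.0 - abs(a - b)
    # T(a_i) = sum of supports from all other arguments.
    t = [sum(sup(a, b) for j, b in enumerate(values) if j != i)
         for i, a in enumerate(values)]
    weights = [1.0 + ti for ti in t]
    return sum(w * v for w, v in zip(weights, values)) / sum(weights)

# The outlying assessment 0.1 receives less support and therefore less weight
# than the three mutually consistent assessments.
print(power_average([0.7, 0.75, 0.8, 0.1]))
```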
Fuzzy modifiers based on fuzzy relations In this paper we introduce a new type of fuzzy modifiers (i.e. mappings that transform a fuzzy set into a modified fuzzy set) based on fuzzy relations. We show how they can be applied for the representation of weakening adverbs (more or less, roughly) and intensifying adverbs (very, extremely) in the inclusive and the non-inclusive interpretation. We illustrate their use in an approximate reasoning scheme.
Effects of Interconnect Process Variations on Signal Integrity With the development of new sub-micron Very Large Scale Integration (VLSI) technologies, the importance of interconnect parasitics for delay and noise has been steadily increasing [1]. Consequently, the variations in interconnect parameters have a larger impact on the final timing and functional yield of the product. Therefore, it is necessary to handle process variations as accurately as possible in Layout Parasitic Extraction (LPE), Static Timing (ST) and Signal Integrity (SI) in deep sub-micron designs. In this paper we analyze the sources of process variation that induce interconnect parasitic variations. We present the relatively important ones through the use of a Response Surface Model (RSM). It was found that, in addition to metal thickness and width variation, damaged dielectric regions on the side of the metal lines are important contributors to cross-talk. We demonstrate the importance of accounting for the correlation between parameters of a given interconnect line, such as interconnect line resistance and thickness. Finally, we present a Monte Carlo (MC) methodology based on the RSM which can significantly reduce the separation of corners and lead to tighter product specs and hence smaller die area and lower power.
1.0185
0.016464
0.016464
0.016464
0.011227
0.007853
0.003908
0.000178
0.000018
0.000002
0
0
0
0
Stagewise weak gradient pursuits Finding sparse solutions to underdetermined inverse problems is a fundamental challenge encountered in a wide range of signal processing applications, from signal acquisition to source separation. This paper looks at greedy algorithms that are applicable to very large problems. The main contribution is the development of a new selection strategy (called stagewise weak selection) that effectively selects several elements in each iteration. The new selection strategy is based on the realization that many classical proofs for recovery of sparse signals can be trivially extended to the new setting. What is more, simulation studies show the computational benefits and good performance of the approach. This strategy can be used in several greedy algorithms, and we argue for the use within the gradient pursuit framework in which selected coefficients are updated using a conjugate update direction. For this update, we present a fast implementation and novel convergence result.
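A minimal sketch of the stagewise weak selection rule: in each iteration every atom whose correlation with the residual is at least a fraction alpha of the largest correlation joins the support. For brevity the coefficients are re-fit by ordinary least squares rather than by the conjugate-direction update of the gradient pursuit framework described in the paper; the test sizes are arbitrary.

```python
import numpy as np

def stagewise_weak_pursuit(Phi, y, alpha=0.7, n_iter=10):
    """Greedy recovery with stagewise weak selection.

    Every column whose correlation with the residual is >= alpha times the
    largest correlation is added to the support in each iteration, then the
    coefficients on the selected columns are re-fit by least squares."""
    m, n = Phi.shape
    support = set()
    x = np.zeros(n)
    residual = y.copy()
    for _ in range(n_iter):
        corr = np.abs(Phi.T @ residual)
        support |= set(np.flatnonzero(corr >= alpha * corr.max()))
        cols = sorted(support)
        coef, *_ = np.linalg.lstsq(Phi[:, cols], y, rcond=None)
        x = np.zeros(n)
        x[cols] = coef
        residual = y - Phi @ x
        if np.linalg.norm(residual) < 1e-10:
            break
    return x

# Small synthetic test: recover a 3-sparse vector from 30 random measurements.
rng = np.random.default_rng(0)
Phi = rng.standard_normal((30, 100)) / np.sqrt(30)
x_true = np.zeros(100); x_true[[5, 17, 42]] = [1.0, -2.0, 0.5]
x_hat = stagewise_weak_pursuit(Phi, Phi @ x_true)
print(np.flatnonzero(np.abs(x_hat) > 0.1))
```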
Fast algorithms for nonconvex compressive sensing: MRI reconstruction from very few data Compressive sensing is the reconstruction of sparse images or signals from very few samples, by means of solving a tractable optimization problem. In the context of MRI, this can allow reconstruction from many fewer k-space samples, thereby reducing scanning time. Previous work has shown that nonconvex optimization reduces still further the number of samples required for reconstruction, while still being tractable. In this work, we extend recent Fourier-based algorithms for convex optimization to the nonconvex setting, and obtain methods that combine the reconstruction abilities of previous nonconvex approaches with the computational speed of state-of-the-art convex methods.
Sparse Recovery by Non-convex Optimization -- Instance Optimality In this note, we address the theoretical properties of Δp, a class of compressed sensing decoders that rely on ℓp minimization with 0<p<1 to recover estimates of sparse and compressible signals from incomplete and inaccurate measurements. In particular, we extend the results of Candès, Romberg and Tao (2006) [3] and Wojtaszczyk (2009) [30] regarding the decoder Δ1, based on ℓ1 minimization, to Δp with 0<p<1. Our results are two-fold. First, we show that under certain sufficient conditions that are weaker than the analogous sufficient conditions for Δ1 the decoders Δp are robust to noise and stable in the sense that they are (2,p) instance optimal for a large class of encoders. Second, we extend the results of Wojtaszczyk to show that, like Δ1, the decoders Δp are (2,2) instance optimal in probability provided the measurement matrix is drawn from an appropriate distribution.
Simultaneously Sparse Solutions to Linear Inverse Problems with Multiple System Matrices and a Single Observation Vector A problem that arises in slice-selective magnetic resonance imaging (MRI) radio-frequency (RF) excitation pulse design is abstracted as a novel linear inverse problem with a simultaneous sparsity constraint. Multiple unknown signal vectors are to be determined, where each passes through a different system matrix and the results are added to yield a single observation vector. Given the matrices and lone observation, the objective is to find a simultaneously sparse set of unknown vectors that approximately solves the system. We refer to this as the multiple-system single-output (MSSO) simultaneous sparse approximation problem. This manuscript contrasts the MSSO problem with other simultaneous sparsity problems and conducts an initial exploration of algorithms with which to solve it. Greedy algorithms and techniques based on convex relaxation are derived and compared empirically. Experiments involve sparsity pattern recovery in noiseless and noisy settings and MRI RF pulse design.
Uniform Uncertainty Principle and Signal Recovery via Regularized Orthogonal Matching Pursuit This paper seeks to bridge the two major algorithmic approaches to sparse signal recovery from an incomplete set of linear measurements - L1-minimization methods and iterative methods (Matching Pursuits). We find a simple regularized version of Orthogonal Matching Pursuit (ROMP) which has advantages of both approaches: the speed and transparency of OMP and the strong uniform guarantees of L1-minimization. Our algorithm ROMP reconstructs a sparse signal in a number of iterations linear in the sparsity, and the reconstruction is exact provided the linear measurements satisfy the Uniform Uncertainty Principle.
Signal Recovery From Random Measurements Via Orthogonal Matching Pursuit This paper demonstrates theoretically and empirically that a greedy algorithm called orthogonal matching pursuit (OMP) can reliably recover a signal with m nonzero entries in dimension d given O(m ln d) random linear measurements of that signal. This is a massive improvement over previous results, which require O(m2) measurements. The new results for OMP are comparable with recent results for another approach called basis pursuit (BP). In some settings, the OMP algorithm is faster and easier to implement, so it is an attractive alternative to BP for signal recovery problems.
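A compact OMP sketch following the standard description (greedy column selection plus least-squares re-fit on the selected columns); the problem sizes and coefficients in the test are arbitrary.

```python
import numpy as np

def omp(Phi, y, sparsity):
    """Orthogonal Matching Pursuit: pick the column most correlated with the
    current residual, then re-fit all selected columns by least squares."""
    support = []
    residual = y.copy()
    for _ in range(sparsity):
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x = np.zeros(Phi.shape[1])
    x[support] = coef
    return x

# Recover a 4-sparse signal in dimension 256 from 60 random measurements.
rng = np.random.default_rng(1)
Phi = rng.standard_normal((60, 256)) / np.sqrt(60)
x_true = np.zeros(256); x_true[[3, 77, 120, 200]] = [2.0, -1.5, 1.2, 1.0]
x_hat = omp(Phi, Phi @ x_true, sparsity=4)
print(np.flatnonzero(np.abs(x_hat) > 1e-6))   # indices of the recovered support
```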
Multitask Compressive Sensing Compressive sensing (CS) is a framework whereby one performs N nonadaptive measurements to constitute a vector v ∈ R^N used to recover an approximation û ∈ R^M of a desired signal u ∈ R^M with N << M; this is performed under the assumption that u is sparse in the basis represented by the matrix Ψ ∈ R^{M×M}. It has been demonstrated that with appropriate design of the compressive measurements used to define v, the decompressive mapping v → û may be performed with error ||u − û||_2^2 having asymptotic properties analogous to those of the best adaptive transform-coding algorithm applied in the basis Ψ. The mapping v → û constitutes an inverse problem, often solved using ℓ1 regularization or related techniques. In most previous research, if L > 1 sets of compressive measurements {v_i}_{i=1..L} are performed, each of the associated {u_i}_{i=1..L} are recovered one at a time, independently. In many applications the L "tasks" defined by the mappings v_i → u_i are not statistically independent, and it may be possible to improve the performance of the inversion if statistical interrelationships are exploited. In this paper, we address this problem within a multitask learning setting, wherein the mapping v → u for each task corresponds to inferring the parameters (here, wavelet coefficients) associated with the desired signal u_i, and a shared prior is placed across all of the L tasks. Under this hierarchical Bayesian modeling, data from all L tasks contribute toward inferring a posterior on the hyperparameters, and once the shared prior is thereby inferred, the data from each of the L individual tasks is then employed to estimate the task-dependent wavelet coefficients. An empirical Bayesian procedure for the estimation of hyperparameters is considered; two fast inference algorithms extending the relevance vector machine (RVM) are developed. Example results on several data sets demonstrate the effectiveness and robustness of the proposed algorithms.
Semantics of Context-Free Languages "Meaning" may be assigned to a string in a context-free language by defining "attributes" of the symbols in a derivation tree for that string. The attributes can be defined by functions associated with each production in the grammar. This paper examines the implications of this process when some of the attributes are "synthesized", i.e., defined solely in terms of attributes of the descendants of the corresponding nonterminal symbol, while other attributes are "inherited", i.e., defined in terms of attributes of the ancestors of the nonterminal symbol. An algorithm is given which detects when such semantic rules could possibly lead to circular definition of some attributes. An example is given of a simple programming language defined with both inherited and synthesized attributes, and the method of definition is compared to other techniques for formal specification of semantics which have appeared in the literature. A simple technique for specifying the "meaning" of languages defined by context-free grammars is introduced in Section 1 of this paper, and its basic mathematical properties are investigated in Sections 2 and 3. An example which indicates how the technique can be applied to the formal definition of programming languages is described in Section 4, and finally, Section 5 contains a somewhat biased comparison of the present method to other known techniques for semantic definition. The discussion in this paper is oriented primarily towards programming languages, but the same methods appear to be relevant also in the study of natural languages. 1. Introduction. Let us suppose that we want to give a precise definition of binary notation for numbers. This can be done in many ways, and in this section we want to consider a manner of definition which can be generalized so that the meaning of other notations can be expressed in the same way. One such way to define binary notation is to base a definition on
Gradualness, uncertainty and bipolarity: Making sense of fuzzy sets This paper discusses basic notions underlying fuzzy sets, especially gradualness, uncertainty, vagueness and bipolarity, in order to clarify the significance of using fuzzy sets in practice. Starting with the idea that a fuzzy set may represent either a precise gradual composite entity or an epistemic construction referring to an ill-known object, it is shown that each of these views suggests a different use of fuzzy sets. Then, it is argued that the usual phrase "fuzzy number" is ambiguous as it induces some confusion between gradual extensions of real numbers and gradual extensions of interval calculations. The distinction between degrees of truth that are compositional and degrees of belief that cannot be so is recalled. The truth-functional calculi of various extensions of fuzzy sets, motivated by the desire to handle ill-known membership grades, are shown to be of limited significance for handling this kind of uncertainty. Finally, the idea of a separate handling of membership and non-membership grades put forward by Atanassov is cast in the setting of reasoning about bipolar information. This intuition is different from the representation of ill-known membership functions and leads to combination rules differing from the ones proposed for handling uncertainty about membership grades.
A model of consensus in group decision making under linguistic assessments This paper presents a consensus model in group decision making under linguistic assessments. It is based on the use of linguistic preferences to provide individuals' opinions, and on the use of fuzzy majority of consensus, represented by means of a linguistic quantifier. Several linguistic consensus degrees and linguistic distances are defined, acting on three levels. The consensus degrees indicate how far a group of individuals is from the maximum consensus, and linguistic distances indicate how far each individual is from current consensus labels over the preferences. This consensus model allows to incorporate more human consistency in decision support systems.
Compressive cooperative sensing and mapping in mobile networks In this paper we consider a mobile cooperative network that is tasked with building a map of the spatial variations of a parameter of interest, such as an obstacle map or an aerial map. We propose a new framework that allows the nodes to build a map of the parameter of interest with a small number of measurements. By using the recent results in the area of compressive sensing, we show how the nodes can exploit the sparse representation of the parameter of interest in the transform domain in order to build a map with minimal sensing. The proposed work allows the nodes to efficiently map the areas that are not sensed directly. To illustrate the performance of the proposed framework, we show how the nodes can build an aerial map or a map of obstacles with sparse sensing. We furthermore show how our proposed framework enables a novel non-invasive approach to mapping obstacles by using wireless channel measurements.
User impatience and network performance In this work, we analyze from passive measurements the correlations between the user-induced interruptions of TCP connections and different end-to-end performance metrics. The aim of this study is to assess the possibility for a network operator to take into account the customers' experience for network monitoring. We first observe that the usual connection-level performance metrics of the interrupted connections are not very different, and sometimes better than those of normal connections. However, the request-level performance metrics show stronger correlations between the interruption rates and the network quality-of-service. Furthermore, we show that the user impatience could also be used to characterize the relative sensitivity of data applications to various network performance metrics.
Heden's bound on maximal partial spreads We prove Heden's result that the deficiency δ of a maximal partial spread in PG(3, q) is greater than 1 + ½(1 + √5)√q unless δ − 1 is a multiple of p, where q = p^n. When q is odd and not a square, we are able to improve this lower bound to roughly √(3q).
Designing type-2 fuzzy logic system controllers via fuzzy Lyapunov synthesis for the output regulator of a servomechanism with nonlinear backlash Fuzzy Lyapunov Synthesis is extended to the design of Type-2 Fuzzy Logic System Controllers for the output regulation problem for a servomechanism with nonlinear backlash. The problem in question is to design a feedback controller so as to obtain the closed-loop system in which all trajectories are bounded and the load of the driver is regulated to a desired position while also attenuating the influence of external disturbances. The servomotor position is the only measurement available for feedback; the proposed extension is far from trivial because of nonminimum phase properties of the system. Performance issues of the Type-2 Fuzzy Logic Regulator constructed are illustrated in a simulation study.
1.11
0.033333
0.03
0.004444
0.002034
0.000928
0.000143
0
0
0
0
0
0
0
Programmable aperture photography: multiplexed light field acquisition In this paper, we present a system including a novel component called programmable aperture and two associated post-processing algorithms for high-quality light field acquisition. The shape of the programmable aperture can be adjusted and used to capture light field at full sensor resolution through multiple exposures without any additional optics and without moving the camera. High acquisition efficiency is achieved by employing an optimal multiplexing scheme, and quality data is obtained by using the two post-processing algorithms designed for self calibration of photometric distortion and for multi-view depth estimation. View-dependent depth maps thus generated help boost the angular resolution of light field. Various post-exposure photographic effects are given to demonstrate the effectiveness of the system and the quality of the captured light field.
Compressive light transport sensing In this article we propose a new framework for capturing light transport data of a real scene, based on the recently developed theory of compressive sensing. Compressive sensing offers a solid mathematical framework to infer a sparse signal from a limited number of nonadaptive measurements. Besides introducing compressive sensing for fast acquisition of light transport to computer graphics, we develop several innovations that address specific challenges for image-based relighting, and which may have broader implications. We develop a novel hierarchical decoding algorithm that improves reconstruction quality by exploiting interpixel coherency relations. Additionally, we design new nonadaptive illumination patterns that minimize measurement noise and further improve reconstruction quality. We illustrate our framework by capturing detailed high-resolution reflectance fields for image-based relighting.
Learning to sense sparse signals: simultaneous sensing matrix and sparsifying dictionary optimization. Sparse signal representation, analysis, and sensing have received a lot of attention in recent years from the signal processing, optimization, and learning communities. On one hand, learning overcomplete dictionaries that facilitate a sparse representation of the data as a linear combination of a few atoms from such a dictionary leads to state-of-the-art results in image and video restoration and classification. On the other hand, the framework of compressed sensing (CS) has shown that sparse signals can be recovered from far fewer samples than those required by the classical Shannon-Nyquist Theorem. The samples used in CS correspond to linear projections obtained by a sensing projection matrix. It has been shown that, for example, a nonadaptive random sampling matrix satisfies the fundamental theoretical requirements of CS, enjoying the additional benefit of universality. On the other hand, a projection sensing matrix that is optimally designed for a certain class of signals can further improve the reconstruction accuracy or further reduce the necessary number of samples. In this paper, we introduce a framework for the joint design and optimization, from a set of training images, of the nonparametric dictionary and the sensing matrix. We show that this joint optimization outperforms both the use of random sensing matrices and those matrices that are optimized independently of the learning of the dictionary. Particular cases of the proposed framework include the optimization of the sensing matrix for a given dictionary as well as the optimization of the dictionary for a predefined sensing environment. The presentation of the framework and its efficient numerical optimization is complemented with numerous examples on classical image datasets.
Microlocal Analysis of the Geometric Separation Problem Image data are often composed of two or more geometrically distinct constituents; in galaxy catalogs, for instance, one sees a mixture of pointlike structures (galaxy superclusters) and curvelike structures (filaments). It would be ideal to process a single image and extract two geometrically pure images, each one containing features from only one of the two geometric constituents. This seems to be a seriously underdetermined problem but recent empirical work achieved highly persuasive separations. We present a theoretical analysis showing that accurate geometric separation of point and curve singularities can be achieved by minimizing the l1 norm of the representing coefficients in two geometrically complementary frames: wavelets and curvelets. Driving our analysis is a specific property of the ideal (but unachievable) representation where each content type is expanded in the frame best adapted to it. This ideal representation has the property that important coefficients are clustered geometrically in phase space, and that at fine scales, there is very little coherence between a cluster of elements in one frame expansion and individual elements in the complementary frame. We formally introduce notions of cluster coherence and clustered sparsity and use this machinery to show that the underdetermined systems of linear equations can be stably solved by l1 minimization; microlocal phase space helps organize the calculations that cluster coherence requires. (c) 2012 Wiley Periodicals, Inc.
Algorithm 890: Sparco: A Testing Framework for Sparse Reconstruction Sparco is a framework for testing and benchmarking algorithms for sparse reconstruction. It includes a large collection of sparse reconstruction problems drawn from the imaging, compressed sensing, and geophysics literature. Sparco is also a framework for implementing new test problems and can be used as a tool for reproducible research. Sparco is implemented entirely in Matlab, and is released as open-source software under the GNU Public License.
Iterative Hard Thresholding for Compressed Sensing Compressed sensing is a technique to sample compressible signals below the Nyquist rate, whilst still allowing near optimal reconstruction of the signal. In this paper we present a theoretical analysis of the iterative hard thresholding algorithm when applied to the compressed sensing recovery problem. We show that the algorithm has the following properties (made more precise in the main text of the paper): it gives near-optimal error guarantees; it is robust to observation noise; it succeeds with a minimum number of observations; it can be used with any sampling operator for which the operator and its adjoint can be computed; the memory requirement is linear in the problem size; its computational complexity per iteration is of the same order as the application of the measurement operator or its adjoint; it requires a fixed number of iterations depending only on the logarithm of a form of signal-to-noise ratio of the signal; and its performance guarantees are uniform in that they only depend on properties of the sampling operator and signal sparsity.
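A minimal iterative-hard-thresholding sketch: a gradient step on the data-fidelity term followed by keeping the K largest-magnitude entries. The conservative step size 1/||Phi||^2 is an assumption made to keep this short sketch stable; the paper's analysis works with a suitably normalized sampling operator and unit step.

```python
import numpy as np

def iht(Phi, y, K, n_iter=300):
    """Iterative hard thresholding: gradient step on ||y - Phi x||^2, then keep
    only the K largest-magnitude entries of the iterate."""
    step = 1.0 / np.linalg.norm(Phi, ord=2) ** 2   # conservative, assumed step size
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        x = x + step * Phi.T @ (y - Phi @ x)
        keep = np.argsort(np.abs(x))[-K:]          # indices of the K largest entries
        mask = np.zeros_like(x, dtype=bool)
        mask[keep] = True
        x[~mask] = 0.0
        if np.linalg.norm(y - Phi @ x) < 1e-10:
            break
    return x

# 4-sparse recovery from 80 random measurements in dimension 200.
rng = np.random.default_rng(2)
Phi = rng.standard_normal((80, 200)) / np.sqrt(80)
x_true = np.zeros(200); x_true[[10, 50, 90, 150]] = [1.5, -2.0, 1.0, 0.8]
x_hat = iht(Phi, Phi @ x_true, K=4)
print(np.flatnonzero(np.abs(x_hat) > 1e-3))
```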
Gradient Projection for Sparse Reconstruction: Application to Compressed Sensing and Other Inverse Problems Many problems in signal processing and statistical inference involve finding sparse solutions to under-determined, or ill-conditioned, linear systems of equations. A standard approach consists in minimizing an objective function which includes a quadratic (squared ) error term combined with a sparseness-inducing regularization term. Basis pursuit, the least absolute shrinkage and selection operato...
Fuzzy Logic and the Resolution Principle The relationship between fuzzy logic and two-valued logic in the context of the first order predicate calculus is discussed. It is proved that if every clause in a set of clauses is something more than a "half-truth" and the most reliable clause has truth-value a and the most unreliable clause has truth-value b, then we are guaranteed that all the logical consequences obtained by repeatedly applying the resolution principle will have truth-value between a and b. The significance of this theorem is also discussed.
Creating knowledge databases for storing and sharing people knowledge automatically using group decision making and fuzzy ontologies Over the last decade, the Internet has undergone a profound change. Thanks to Web 2.0 technologies, the Internet has become a platform where everybody can participate and provide their own personal information and experiences. Ontologies were designed in an effort to sort and categorize all sorts of information. In this paper, an automated method for retrieving subjective information from Internet users and creating ontologies is described. Thanks to this method, it is possible to automatically create knowledge databases using the common knowledge of a large number of people. Using these databases, anybody can consult and benefit from the retrieved information. Group decision making methods are used to extract users' information, and fuzzy ontologies are employed to store the collected knowledge.
Exact and Approximate Sparse Solutions of Underdetermined Linear Equations In this paper, we empirically investigate the NP-hard problem of finding sparsest solutions to linear equation systems, i.e., solutions with as few nonzeros as possible. This problem has recently received considerable interest in the sparse approximation and signal processing literature. We use a branch-and-cut approach via the maximum feasible subsystem problem to compute optimal solutions for small instances and investigate the uniqueness of the optimal solutions. We furthermore discuss six (modifications of) heuristics for this problem that appear in different parts of the literature. For small instances, the exact optimal solutions allow us to evaluate the quality of the heuristics, while for larger instances we compare their relative performance. One outcome is that the so-called basis pursuit heuristic performs worse, compared to the other methods. Among the best heuristics are a method due to Mangasarian and one due to Chinneck.
Is Gauss Quadrature Better than Clenshaw-Curtis? We compare the convergence behavior of Gauss quadrature with that of its younger brother, Clenshaw-Curtis. Seven-line MATLAB codes are presented that implement both methods, and experiments show that the supposed factor-of-2 advantage of Gauss quadrature is rarely realized. Theorems are given to explain this effect. First, following O'Hara and Smith in the 1960s, the phenomenon is explained as a consequence of aliasing of coefficients in Chebyshev expansions. Then another explanation is offered based on the interpretation of a quadrature formula as a rational approximation of $\log((z+1)/(z-1))$ in the complex plane. Gauss quadrature corresponds to Padé approximation at $z=\infty$. Clenshaw-Curtis quadrature corresponds to an approximation whose order of accuracy at $z=\infty$ is only half as high, but which is nevertheless equally accurate near $[-1,1]$.
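The comparison is easy to reproduce; below is a short sketch (not the paper's seven-line MATLAB codes) that builds the n-point Clenshaw-Curtis rule by exactly integrating the Chebyshev interpolant through the n extreme points and compares it with Gauss-Legendre on a non-smooth integrand, where the two rules converge at similar rates.

```python
import numpy as np
from numpy.polynomial import chebyshev, legendre

def gauss_legendre(f, n):
    """n-point Gauss-Legendre quadrature on [-1, 1]."""
    x, w = legendre.leggauss(n)
    return w @ f(x)

def clenshaw_curtis(f, n):
    """n-point Clenshaw-Curtis on [-1, 1]: interpolate f at the Chebyshev extreme
    points cos(k*pi/(n-1)) and integrate the interpolant exactly, using
    integral of T_k over [-1, 1] = 2/(1 - k^2) for even k and 0 for odd k."""
    x = np.cos(np.pi * np.arange(n) / (n - 1))
    c = chebyshev.chebfit(x, f(x), n - 1)     # interpolating Chebyshev coefficients
    weights = np.zeros(n)
    even = np.arange(0, n, 2)
    weights[even] = 2.0 / (1.0 - even ** 2)
    return c @ weights

# Non-smooth integrand |x|^3 with exact integral 1/2: both errors decay at a
# comparable algebraic rate, illustrating the "rarely a factor-of-2" point.
f = lambda x: np.abs(x) ** 3
exact = 0.5
for n in (8, 16, 32, 64):
    print(n, abs(gauss_legendre(f, n) - exact), abs(clenshaw_curtis(f, n) - exact))
```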
Practical RDF schema reasoning with annotated semantic web data Semantic Web data with annotations is becoming available, the YAGO knowledge base being a prominent example. In this paper we present an approach to perform the closure of large RDF Schema annotated semantic web data using standard database technology. In particular, we exploit several alternatives to address the problem of computing the transitive closure with real fuzzy semantic data extracted from YAGO in the PostgreSQL database management system. We benchmark the several alternatives and compare them to classical RDF Schema reasoning, providing the first implementation of annotated RDF Schema in persistent storage.
Fuzzy concepts and formal methods: some illustrative examples It has been recognised that formal methods are useful as a modelling tool in requirements engineering. Specification languages such as Z permit the precise and unambiguous modelling of system properties and behaviour. However, some system problems, particularly those drawn from the information systems (IS) problem domain, may be difficult to model in crisp or precise terms. It may also be desirable that formal modelling should commence as early as possible, even when our understanding of parts of the problem domain is only approximate. This paper identifies the problem types of interest and argues that they are characterised by uncertainty and imprecision. It suggests fuzzy set theory as a useful formalism for modelling aspects of this imprecision. The paper illustrates how a fuzzy logic toolkit for Z can be applied to such problem domains. Several examples are presented illustrating the representation of imprecise concepts as fuzzy sets and relations, and soft pre-conditions and system requirements as a series of linguistically quantified propositions.
Designing type-2 fuzzy logic system controllers via fuzzy Lyapunov synthesis for the output regulator of a servomechanism with nonlinear backlash Fuzzy Lyapunov Synthesis is extended to the design of Type-2 Fuzzy Logic System Controllers for the output regulation problem for a servomechanism with nonlinear backlash. The problem in question is to design a feedback controller so as to obtain the closed-loop system in which all trajectories are bounded and the load of the driver is regulated to a desired position while also attenuating the influence of external disturbances. The servomotor position is the only measurement available for feedback; the proposed extension is far from trivial because of nonminimum phase properties of the system. Performance issues of the Type-2 Fuzzy Logic Regulator constructed are illustrated in a simulation study.
1.11
0.024
0.006667
0.000476
0.000067
0.000019
0.000002
0
0
0
0
0
0
0
Bootstrap techniques and fuzzy random variables: Synergy in hypothesis testing with fuzzy data In previous studies we have stated that the well-known bootstrap techniques are a valuable tool in testing statistical hypotheses about the means of fuzzy random variables, when these variables are supposed to take on a finite number of different values and these values being fuzzy subsets of the one-dimensional Euclidean space. In this paper we show that the one-sample method of testing about the mean of a fuzzy random variable can be extended to general ones (more precisely, to those whose range is not necessarily finite and whose values are fuzzy subsets of finite-dimensional Euclidean space). This extension is immediately developed by combining some tools in the literature, namely, bootstrap techniques on Banach spaces, a metric between fuzzy sets based on the support function, and an embedding of the space of fuzzy random variables into a Banach space which is based on the support function.
A fuzzy-based methodology for the analysis of diabetic neuropathy A new model for the fuzzy-based analysis of diabetic neuropathy, whose pathogenesis is so far not well known, is illustrated. The underlying algebraic structure is a commutative l-monoid, whose support is a set of classifications based on the concept of linguistic variable introduced by Zadeh. The analysis is carried out by means of the patient's anagraphical and clinical data, e.g. age, sex, duration of the disease, insulinic needs, severity of diabetes, and the possible presence of complications. The results we obtained are identical to the medical diagnoses. Moreover, by analyzing suitable relevance factors one obtains reasonable information about the etiology of the disease; our results agree with the most credited clinical hypotheses.
Simulation of fuzzy random variables This work deals with the simulation of fuzzy random variables, which can be used to model various realistic situations, where uncertainty is not only present in form of randomness but also in form of imprecision, described by means of fuzzy sets. Utilizing the common arithmetics in the space of all fuzzy sets only induces a conical structure. As a consequence, it is difficult to directly apply the usual simulation techniques for functional data. In order to overcome this difficulty two different approaches based on the concept of support functions are presented. The first one makes use of techniques for simulating Hilbert space-valued random elements and afterwards projects on the cone of all fuzzy sets. It is shown by empirical results that the practicability of this approach is limited. The second approach imitates the representation of every element of a separable Hilbert space in terms of an orthonormal basis directly on the space of fuzzy sets. In this way, a new approximation of fuzzy sets useful to approximate and simulate fuzzy random variables is developed. This second approach is adequate to model various realistic situations.
A generalized real-valued measure of the inequality associated with a fuzzy random variable Fuzzy random variables have been introduced by Puri and Ralescu as an extension of random sets. In this paper, we first introduce a real-valued generalized measure of the “relative variation” (or inequality) associated with a fuzzy random variable. This measure is inspired in Csiszár's f-divergence, and extends to fuzzy random variables many well-known inequality indices. To guarantee certain relevant properties of this measure, we have to distinguish two main families of measures which will be characterized. Then, the fundamental properties are derived, and an outstanding measure in each family is separately examined on the basis of an additive decomposition property and an additive decomposability one. Finally, two examples illustrate the application of the study in this paper.
Two-sample hypothesis tests of means of a fuzzy random variable In this paper we will consider some two-sample hypothesis tests for means concerning a fuzzy random variable in two populations. For this purpose, we will make use of a generalized metric for fuzzy numbers, and we will develop an exact study for the case of normal fuzzy random variables and an asymptotic study for the case of simple general fuzzy random variables.
Joint propagation of probability and possibility in risk analysis: Towards a formal framework This paper discusses some models of Imprecise Probability Theory obtained by propagating uncertainty in risk analysis when some input parameters are stochastic and perfectly observable, while others are either random or deterministic, but the information about them is partial and is represented by possibility distributions. Our knowledge about the probability of events pertaining to the output of some function of interest from the risk analysis model can be either represented by a fuzzy probability or by a probability interval. It is shown that this interval is the average cut of the fuzzy probability of the event, thus legitimating the propagation method. Besides, several independence assumptions underlying the joint probability-possibility propagation methods are discussed and illustrated by a motivating example.
Toward developing agility evaluation of mass customization systems using 2-tuple linguistic computing Mass customization (MC) relates to the ability to provide individually designed products and services to every customer through high process flexibility and integration. To respond to the mass customization trend, it is necessary to develop an agility-based manufacturing system that captures the traits involved in MC. An MC manufacturing agility evaluation approach based on the concepts of TOPSIS is proposed, analyzing the agility of organization management, product design, processing manufacture, partnership formation capability and integration of information systems. The proposed method relies on 2-tuple fuzzy linguistic computing to transform the heterogeneous information assessed by multiple experts into a common decision domain. It is expected to aggregate the experts' heterogeneous information and offer sufficient and conclusive information for evaluating the agile manufacturing alternatives. A suitable agile system for implementing MC can then be established.
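For orientation, a sketch of the crisp TOPSIS core used in such a ranking step (vector normalisation, ideal and anti-ideal solutions, closeness coefficient). The 2-tuple linguistic aggregation of heterogeneous expert assessments, which is the contribution of the paper, is not reproduced here, and the numbers below are invented.

```python
import numpy as np

def topsis(scores, weights):
    """Rank alternatives with plain TOPSIS.

    scores  -- matrix (alternatives x criteria), larger is better for every criterion
    weights -- criteria weights summing to 1
    Returns the closeness coefficient of each alternative to the ideal solution."""
    norm = scores / np.linalg.norm(scores, axis=0)   # vector normalisation per criterion
    v = norm * weights                               # weighted normalised matrix
    ideal, anti = v.max(axis=0), v.min(axis=0)
    d_plus = np.linalg.norm(v - ideal, axis=1)
    d_minus = np.linalg.norm(v - anti, axis=1)
    return d_minus / (d_plus + d_minus)

# Three candidate manufacturing configurations scored on four agility criteria.
scores = np.array([[7.0, 8.0, 6.0, 9.0],
                   [8.0, 6.0, 7.0, 7.0],
                   [6.0, 9.0, 8.0, 6.0]])
weights = np.array([0.3, 0.3, 0.2, 0.2])
print(topsis(scores, weights))
```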
A hybrid recommender system for the selective dissemination of research resources in a Technology Transfer Office Recommender systems can be used to help users access relevant information. Hybrid recommender systems represent a promising solution for multiple applications. In this paper we propose a hybrid fuzzy linguistic recommender system to help the Technology Transfer Office staff in the dissemination of research resources of interest to users. The system recommends to users both specialized and complementary research resources and, additionally, it discovers potential collaboration possibilities in order to form multidisciplinary working groups. Thus, this system becomes an application that can be used to help the Technology Transfer Office staff to selectively disseminate research knowledge and to increase its information discovering properties and personalization capacities in an academic environment.
Consistent models of transitivity for reciprocal preferences on a finite ordinal scale In this paper we consider a decision maker who shows his/her preferences for different alternatives through a finite set of ordinal values. We analyze the problem of consistency taking into account some transitivity properties within this framework. These properties are based on the very general class of conjunctors on the set of ordinal values. Each reciprocal preference relation on a finite ordinal scale has both a crisp preference and a crisp indifference relation associated to it in a natural way. Taking this into account, we have started by analyzing the problem of propagating transitivity from the preference relation on a finite ordinal scale to the crisp preference and indifference relations. After that, we carried out the analysis in the opposite direction. We provide some necessary and sufficient conditions for that propagation, and therefore, we characterize the consistent class of conjunctors in each direction.
Dummynet: a simple approach to the evaluation of network protocols Network protocols are usually tested in operational networks or in simulated environments. With the former approach it is not easy to set and control the various operational parameters such as bandwidth, delays, queue sizes. Simulators are easier to control, but they are often only an approximate model of the desired setting, especially for what regards the various traffic generators (both producers and consumers) and their interaction with the protocol itself. In this paper we show how a simple, yet flexible and accurate network simulator - dummynet - can be built with minimal modifications to an existing protocol stack, allowing experiments to be run on a standalone system. dummynet works by intercepting communications of the protocol layer under test and simulating the effects of finite queues, bandwidth limitations and communication delays. It runs in a fully operational system, hence allowing the use of real traffic generators and protocol implementations, while solving the problem of simulating unusual environments. With our tool, doing experiments with network protocols is as simple as running the desired set of applications on a workstation. A FreeBSD implementation of dummynet, targeted to TCP, is available from the author. This implementation is highly portable and compatible with other BSD-derived systems, and takes less than 300 lines of kernel code.
The impact on retrieval effectiveness of skewed frequency distributions We present an analysis of word senses that provides a fresh insight into the impact of word ambiguity on retrieval effectiveness, with potential broader implications for other processes of information retrieval. Using a methodology of forming artificially ambiguous words, known as pseudowords, and through reference to other researchers' work, the analysis illustrates that the distribution of the frequency of occurrence of the senses of a word plays a strong role in ambiguity's impact on effectiveness. Further investigation shows that this analysis may also be applicable to other processes of retrieval, such as Cross Language Information Retrieval, query expansion, retrieval of OCR'ed texts, and stemming. The analysis appears to provide a means of explaining, at least in part, the reasons for these processes' impact (or lack of it) on effectiveness.
Bayesian inference with optimal maps We present a new approach to Bayesian inference that entirely avoids Markov chain simulation, by constructing a map that pushes forward the prior measure to the posterior measure. Existence and uniqueness of a suitable measure-preserving map is established by formulating the problem in the context of optimal transport theory. We discuss various means of explicitly parameterizing the map and computing it efficiently through solution of an optimization problem, exploiting gradient information from the forward model when possible. The resulting algorithm overcomes many of the computational bottlenecks associated with Markov chain Monte Carlo. Advantages of a map-based representation of the posterior include analytical expressions for posterior moments and the ability to generate arbitrary numbers of independent posterior samples without additional likelihood evaluations or forward solves. The optimization approach also provides clear convergence criteria for posterior approximation and facilitates model selection through automatic evaluation of the marginal likelihood. We demonstrate the accuracy and efficiency of the approach on nonlinear inverse problems of varying dimension, involving the inference of parameters appearing in ordinary and partial differential equations.
Robust simulation methodology for surface-roughness loss in interconnect and package modelings In multigigahertz integrated-circuit design, the extra energy loss caused by conductor surface roughness in metallic interconnects and packagings is more evident than ever before and demands explicit consideration for accurate prediction of signal integrity and energy consumption. Existing techniques based on analytical approximation, despite simple formulations, suffer from restrictive valid ranges, namely, either small or large roughness/frequencies. In this paper, we propose a robust and efficient numerical-simulation methodology applicable to evaluating general surface roughness, described by parameterized stochastic processes, across a wide frequency band. Traditional computation-intensive electromagnetic simulation is avoided via a tailored scalar-wave modeling to capture the power loss due to surface roughness. The spectral stochastic collocation method is applied to construct the complete statistical model. Comparisons with full wave simulation as well as existing methods in their respective valid ranges then verify the effectiveness of the proposed approach.
Split Bregman iterative algorithm for sparse reconstruction of electrical impedance tomography In this paper, we present an evaluation of the use of the split Bregman iterative algorithm for the L1-norm regularized inverse problem of electrical impedance tomography. Simulations are performed to validate that our algorithm is competitive in terms of the imaging quality and computational speed in comparison with several state-of-the-art algorithms. Results also indicate that in contrast to the conventional L2-norm regularization method and the total variation (TV) regularization method, the L1-norm regularization method can sharpen the edges and is more robust against data noise.
1.016437
0.021262
0.020568
0.018599
0.012988
0.007177
0.000145
0.000007
0.000001
0
0
0
0
0
An improved logical effort model and framework applied to optimal sizing of circuits operating in multiple supply voltage regimes Digital near-threshold logic circuits have recently been proposed for applications in the ultra-low power end of the design spectrum, where the performance is of secondary importance. However, the characteristics of MOS transistors operating in the near-threshold region are very different from those in the strong-inversion region. This paper first derives the logical effort and parasitic delay values for logic gates in multiple voltage (sub/near/super-threshold) regimes based on the transregional model. The transregional model shows higher accuracy for both sub- and near-threshold regions compared with the subthreshold model. Furthermore, the derived near-threshold logical effort method is subsequently used for delay optimization of circuits operating in both near- and super-threshold regimes. In order to achieve this goal, a joint optimization of transistor sizing and adaptive body biasing is proposed and optimally solved using geometric programming. Experimental results show that our improved logical effort-based optimization framework provides a performance improvement of up to 40.1% over the conventional logical effort method.
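For reference, the textbook super-threshold logical effort arithmetic that the paper extends: with path logical effort G, branching effort B and electrical effort H, the path effort is F = G·B·H and the minimum delay of an N-stage path is N·F^(1/N) plus the parasitic delays. The sketch below uses the standard gate values, not the paper's transregional near-threshold coefficients.

```python
def logical_effort_path_delay(g, p, branching, H):
    """Minimum path delay (in units of tau) by the textbook logical effort method.

    g, p       -- per-stage logical efforts and parasitic delays
    branching  -- per-stage branching efforts
    H          -- electrical effort of the whole path (C_load / C_in)
    """
    N = len(g)
    G = 1.0
    for gi in g:
        G *= gi
    B = 1.0
    for bi in branching:
        B *= bi
    F = G * B * H                      # path effort
    f_opt = F ** (1.0 / N)             # optimal effort per stage
    return N * f_opt + sum(p)

# Example: 3-stage path INV -> NAND2 -> INV with no branching and H = 20.
g = [1.0, 4.0 / 3.0, 1.0]
p = [1.0, 2.0, 1.0]
print(logical_effort_path_delay(g, p, branching=[1.0, 1.0, 1.0], H=20.0))
```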
Joint sizing and adaptive independent gate control for FinFET circuits operating in multiple voltage regimes using the logical effort method FinFET has been proposed as an alternative for bulk CMOS in current and future technology nodes due to more effective channel control, reduced random dopant fluctuation, high ON/OFF current ratio, lower energy consumption, etc. Key characteristics of FinFET operating in the sub/near-threshold region are very different from those in the strong-inversion region. This paper first introduces an analytical transregional FinFET model with high accuracy in both sub- and near-threshold regimes. Next, the paper extends the well-known and widely-adopted logical effort delay calculation and optimization method to FinFET circuits operating in multiple voltage (sub/near/super-threshold) regimes. More specifically, a joint optimization of gate sizing and adaptive independent gate control is presented and solved in order to minimize the delay of FinFET circuits operating in multiple voltage regimes. Experimental results on a 32nm Predictive Technology Model for FinFET demonstrate the effectiveness of the proposed logical effort-based delay optimization framework.
Fast and accurate statistical characterization of standard cell libraries With devices entering the nanometer scale, process-induced variations, intrinsic variations and reliability issues impose new challenges for the electronic design automation industry. Design automation tools must keep pace with technology and continue to predict high-level design metrics such as delay and power accurately and efficiently. Although it is the most time consuming, Monte Carlo is still the simplest and most widely employed technique for simulating the impact of process variability at circuit level. This work addresses the problem of finding efficient alternatives to Monte Carlo for modeling circuit characteristics under statistical variability. It employs the error propagation technique and Response Surface Methodology to replace Monte Carlo simulations in library characterization.
Calculation of Generalized Polynomial-Chaos Basis Functions and Gauss Quadrature Rules in Hierarchical Uncertainty Quantification Stochastic spectral methods are efficient techniques for uncertainty quantification. Recently they have shown excellent performance in the statistical analysis of integrated circuits. In stochastic spectral methods, one needs to determine a set of orthonormal polynomials and a proper numerical quadrature rule. The former are used as the basis functions in a generalized polynomial chaos expansion. The latter is used to compute the integrals involved in stochastic spectral methods. Obtaining such information requires knowing the density function of the random input a-priori. However, individual system components are often described by surrogate models rather than density functions. In order to apply stochastic spectral methods in hierarchical uncertainty quantification, we first propose to construct physically consistent closed-form density functions by two monotone interpolation schemes. Then, by exploiting the special forms of the obtained density functions, we determine the generalized polynomial-chaos basis functions and the Gauss quadrature rules that are required by a stochastic spectral simulator. The effectiveness of our proposed algorithm is verified by both synthetic and practical circuit examples.
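Once a closed-form density is available, a Gauss quadrature rule for it can be obtained with the classical moment-matrix (Golub-Welsch) construction. The sketch below illustrates that step only, with an assumed uniform density as a sanity check; it does not reproduce the paper's monotone-interpolation construction of the density or its basis-function generation.

```python
import numpy as np
from scipy.integrate import quad

def gauss_rule_from_density(pdf, support, n):
    """Build an n-point Gauss quadrature rule for the weight function `pdf` on
    `support` via the Hankel moment matrix and the Golub-Welsch eigenvalue step."""
    # Raw moments m_0 .. m_{2n} of the density.
    m = np.array([quad(lambda x, k=k: x ** k * pdf(x), *support)[0]
                  for k in range(2 * n + 1)])
    # Hankel moment matrix and its Cholesky factor M = R^T R.
    M = np.array([[m[i + j] for j in range(n + 1)] for i in range(n + 1)])
    R = np.linalg.cholesky(M).T
    # Three-term recurrence coefficients (diagonal / off-diagonal of the Jacobi matrix).
    alpha = np.array([R[j, j + 1] / R[j, j]
                      - (R[j - 1, j] / R[j - 1, j - 1] if j > 0 else 0.0)
                      for j in range(n)])
    beta = np.array([R[j + 1, j + 1] / R[j, j] for j in range(n - 1)])
    J = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    nodes, vecs = np.linalg.eigh(J)
    weights = m[0] * vecs[0, :] ** 2
    return nodes, weights

# Sanity check against Gauss-Legendre: uniform density 1/2 on [-1, 1].
nodes, weights = gauss_rule_from_density(lambda x: 0.5, (-1.0, 1.0), n=3)
print(nodes)                 # close to the 3-point Gauss-Legendre nodes 0, +/-sqrt(3/5)
print(weights @ nodes ** 4)  # matches E[X^4] = 1/5 for X uniform on [-1, 1]
```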
Statistical modeling with the virtual source MOSFET model A statistical extension of the ultra-compact Virtual Source (VS) MOSFET model is developed here for the first time. The characterization uses a statistical extraction technique based on the backward propagation of variance (BPV) with variability parameters derived directly from the nominal VS model. The resulting statistical VS model is extensively validated using Monte Carlo simulations, and the statistical distributions of several figures of merit for logic and memory cells are compared with those of a BSIM model from a 40-nm CMOS industrial design kit. The comparisons show almost identical distributions with distinct run time advantages for the statistical VS model. Additional simulations show that the statistical VS model accurately captures non-Gaussian features that are important for low-power designs.
Model reduction of time-varying linear systems using approximate multipoint Krylov-subspace projectors In this paper a method is presented for model reduction of systems described by time-varying differential-algebraic equations. The method allows automated extraction of reduced models for nonlinear RF blocks, such as mixers and filters, that have a near-linear signal path but may contain strongly nonlinear time-varying components. The models have the accuracy of a transistor-level nonlinear simulation but are very compact and so can be used in system-level simulation and design. The model reduction procedure is based on a multipoint rational approximation algorithm formed by orthogonal projection of the original time-varying linear system into an approximate Krylov subspace. The models obtained from the approximate Krylov-subspace projectors can be obtained much more easily than those from the exact projectors but show negligible difference in accuracy.
Joint Design-Time and Post-Silicon Minimization of Parametric Yield Loss using Adjustable Robust Optimization Parametric yield loss due to variability can be effectively reduced both by design-time optimization strategies and by adjusting circuit parameters to the realizations of variable parameters. The two levels of tuning operate within a single variability budget, and because their effectiveness depends on the magnitude and the spatial structure of variability, their joint co-optimization is required. In this paper we develop a formal optimization algorithm for such co-optimization and link it to the control and measurement overhead via the formal notions of measurement and control complexity. We describe an optimization strategy that unifies design-time gate-level sizing and post-silicon adaptation using adaptive body bias at the chip level. The statistical formulation utilizes adjustable robust linear programming to derive the optimal policy for assigning body bias once the uncertain variables, such as gate length and threshold voltage, are known. Computational tractability is achieved by restricting the optimal body bias selection policy to be an affine function of the uncertain variables. We demonstrate good run-time and show that 5-35% savings in leakage power across the benchmark circuits are possible. The dependence of the results on measurement and control complexity is studied, and points of diminishing returns for both metrics are identified.
Modeling the Driver Load in the Presence of Process Variations Feature sizes of less than 90 nm and clock frequencies higher than 3 GHz call for fundamental changes in driver-load models. New driver-load models must consider the process variation impact of the manufacturing procedure, the nonlinear behavior of the drivers, the inductance effects of the loads, and the slew rates of the output waveforms. The present deterministic driver-load models use the conventional deterministic driver-delay model with a single Ceff (one ramp) approach. Neither the statistical property of the driver nor the inductance effects of the interconnect are taken into consideration. Therefore, the accuracy of existing models is questionable. This paper introduces a new driver-load model that predicts the driver-delay changes in the presence of process variations and represents the interconnect load as a distributed resistance, inductance and capacitance (RLC) network. The employed orthogonal polynomial-based probabilistic collocation method (PCM) constructs a driver-delay analytical equation from the circuit's output response. The obtained analytical equation is used to evaluate the driver output delay distribution. In addition, the load is modeled with a two-effective-capacitance approach in order to capture the nonlinear behavior of the driver. The lossy transmission line approach accounts for the impact of the inductance when modeling the driving-point interconnect load. The new model shows improvements of 9% in the average delay error and 2.2% in the slew rate error over the simulation program with integrated circuit emphasis (SPICE) and the one ramp modeling approaches. Compared with the Monte Carlo method, the proposed model demonstrates a less than 3% error in the expected gate delay value and a less than 5% error in the gate delay variance.
Efficient Reduced-Order Modeling of Frequency-Dependent Coupling Inductances associated with 3-D Interconnect Structures Since the first papers on asymptotic waveform evaluation (AWE), reduced order models have become standard for improving interconnect simulation efficiency, and very recent work has demonstrated that bi-orthogonalization algorithms can be used to robustly generate AWE-style macromodels. In this paper we describe using block Arnoldi-based orthogonalization methods to generate reduced order models from FastHenry, a multipole-accelerated three dimensional inductance extraction program. Examples are analyzed to demonstrate the efficiency and accuracy of the block Arnoldi algorithm.
A machine learning approach to coreference resolution of noun phrases In this paper, we present a learning approach to coreference resolution of noun phrases in unrestricted text. The approach learns from a small, annotated corpus and the task includes resolving not just a certain type of noun phrase (e.g., pronouns) but rather general noun phrases. It also does not restrict the entity types of the noun phrases; that is, coreference is assigned whether they are of "organization," "person," or other types. We evaluate our approach on common data sets (namely, the MUC-6 and MUC-7 coreference corpora) and obtain encouraging results, indicating that on the general noun phrase coreference task, the learning approach holds promise and achieves accuracy comparable to that of nonlearning approaches. Our system is the first learning-based system that offers performance comparable to that of state-of-the-art nonlearning systems on these data sets.
Error correction via linear programming Suppose we wish to transmit a vector f ∈ R^n reliably. A frequently discussed approach consists in encoding f with an m by n coding matrix A. Assume now that a fraction of the entries of Af are corrupted in a completely arbitrary fashion. We do not know which entries are affected nor do we know how they are affected. Is it possible to recover f exactly from the corrupted m-dimensional vector y? This paper proves that under suitable conditions on the coding matrix A, the input f is the unique solution to the ℓ1-minimization problem (‖x‖_{ℓ1} := Σ_i |x_i|)
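The decoding problem above reduces to a linear program: recover f from y = Af + e by minimizing the ℓ1 norm of the residual. The sketch below sets this up with scipy's linprog; the dimensions, the Gaussian coding matrix, and the 10% corruption rate are arbitrary illustration choices rather than anything prescribed by the paper.

import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m = 64, 256                          # message length, codeword length
A = rng.standard_normal((m, n))         # coding matrix (illustrative choice)
f = rng.standard_normal(n)
y = A @ f
corrupt = rng.choice(m, size=m // 10, replace=False)
y[corrupt] += 10.0 * rng.standard_normal(corrupt.size)   # arbitrary gross corruption

# min_x ||y - A x||_1  ==  min_{x,t} sum(t)  s.t.  -t <= y - A x <= t
# Decision vector z = [x (n entries), t (m entries)].
c = np.concatenate([np.zeros(n), np.ones(m)])
A_ub = np.block([[ A, -np.eye(m)],      #  A x - t <= y
                 [-A, -np.eye(m)]])     # -A x - t <= -y
b_ub = np.concatenate([y, -y])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * (n + m))
x_hat = res.x[:n]
print("max recovery error:", np.abs(x_hat - f).max())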
QoE-based transport optimization for video delivery over next generation cellular networks Video streaming is considered as one of the most important and challenging applications for next generation cellular networks. Current infrastructures are not prepared to deal with the increasing amount of video traffic. The current Internet, and in particular the mobile Internet, was not designed with video requirements in mind and, as a consequence, its architecture is very inefficient for handling video traffic. Enhancements are needed to cater for improved Quality of Experience (QoE) and improved reliability in a mobile network. In this paper we design a novel dynamic transport architecture for next generation mobile networks adapted to video service requirements. Its main novelty is the transport optimization of video delivery that is achieved through a QoE oriented redesign of networking mechanisms as well as the integration of Content Delivery Networks (CDN) techniques.
Good random matrices over finite fields. The random matrix uniformly distributed over the set of all m-by-n matrices over a finite field plays an important role in many branches of information theory. In this paper a generalization of this random matrix, called k-good random matrices, is studied. It is shown that a k-good random m-by-n matrix with a distribution of minimum support size is uniformly distributed over a maximum-rank-distance (MRD) code of minimum rank distance min{m, n} - k + 1, and vice versa. Further examples of k-good random matrices are derived from homogeneous weights on matrix modules. Several applications of k-good random matrices are given, establishing links with some well-known combinatorial problems. Finally, the related combinatorial concept of a k-dense set of m-by-n matrices is studied, identifying such sets as blocking sets with respect to (m - k)- dimensional flats in a certain m-by-n matrix geometry and determining their minimum size in special cases.
The laws of large numbers for fuzzy random variables A new approach to the weak and strong laws of large numbers for fuzzy random variables is discussed in this paper by proposing notions of convergence in probability and convergence with probability one for fuzzy random variables. We first consider the limit properties of fuzzy numbers by invoking the Hausdorff metric, and then extend them to convergence in probability and convergence with probability one for fuzzy random variables. We provide the notions of weak and strong convergence in probability and weak and strong convergence with probability one for fuzzy random variables. Finally, we establish the weak and strong laws of large numbers for fuzzy random variables in both the weak and the strong sense.
1.2224
0.2224
0.2224
0.0556
0.040013
0.002861
0.000516
0.000067
0.000001
0
0
0
0
0
SDNHAS: An SDN-Enabled Architecture to Optimize QoE in HTTP Adaptive Streaming. HTTP adaptive streaming (HAS) is receiving much attention from both industry and academia as it has become the de facto approach to stream media content over the Internet. Recently, we proposed a streaming architecture called SDNDASH [1] to address HAS scalability issues including video instability, quality of experience (QoE) unfairness, and network resource underutilization, while maximizing per player QoE. While SDNDASH was a significant step forward, there were three unresolved limitations: 1) it did not scale well when the number of HAS players increased; 2) it generated communication overhead; and 3) it did not address client heterogeneity. These limitations could result in suboptimal decisions that led to viewer dissatisfaction. To that effect, we propose an enhanced intelligent streaming architecture, called SDNHAS, which leverages software defined networking (SDN) capabilities of assisting HAS players in making better adaptation decisions. This architecture accommodates large-scale deployments through a cluster-based mechanism, reduces communication overhead between the HAS players and SDN core, and allocates the network resources effectively in the presence of short- and long-term changes in the network.
An SDN Architecture for Privacy-Friendly Network-Assisted DASH Dynamic Adaptive Streaming over HTTP (DASH) is the premier technology for Internet video streaming. DASH efficiently uses existing HTTP-based delivery infrastructures implementing adaptive streaming. However, DASH traffic is bursty in nature. This causes performance problems when DASH players share a network connection or in networks with heavy background traffic. The result is unstable and lower quality video. In this article, we present the design and implementation of a so-called DASH Assisting Network Element (DANE). Our system provides target bitrate signaling and dynamic traffic control. These two mechanisms realize proper bandwidth sharing among clients. Our system is privacy friendly and fully supports encrypted video streams. Trying to improve the streaming experience for users who share a network connection, our system increases the video bitrate and reduces the number of quality switches. We show this through evaluations in our Wi-Fi testbed.
Quality of Experience-Centric Management of Adaptive Video Streaming Services: Status and Challenges. Video streaming applications currently dominate Internet traffic. Particularly, HTTP Adaptive Streaming (HAS) has emerged as the dominant standard for streaming videos over the best-effort Internet, thanks to its capability of matching the video quality to the available network resources. In HAS, the video client is equipped with a heuristic that dynamically decides the most suitable quality to stream the content, based on information such as the perceived network bandwidth or the video player buffer status. The goal of this heuristic is to optimize the quality as perceived by the user, the so-called Quality of Experience (QoE). Despite the many advantages brought by the adaptive streaming principle, optimizing users’ QoE is far from trivial. Current heuristics are still suboptimal when sudden bandwidth drops occur, especially in wireless environments, thus leading to freezes in the video playout, the main factor influencing users’ QoE. This issue is aggravated in case of live events, where the player buffer has to be kept as small as possible in order to reduce the playout delay between the user and the live signal. In light of the above, in recent years, several works have been proposed with the aim of extending the classical purely client-based structure of adaptive video streaming, in order to fully optimize users’ QoE. In this article, a survey is presented of research works on this topic together with a classification based on where the optimization takes place. This classification goes beyond client-based heuristics to investigate the usage of server- and network-assisted architectures and of new application and transport layer protocols. In addition, we outline the major challenges currently arising in the field of multimedia delivery, which are going to be of extreme relevance in future years.
Adaptive Bitrate Selection: A Survey. HTTP adaptive streaming (HAS) is the most recent attempt regarding video quality adaptation. It enables cheap and easy to implement streaming technology without the need for a dedicated infrastructure. By using a combination of TCP and HTTP it has the advantage of reusing all the existing technologies designed for ordinary web. Equally important is that HAS traffic passes through firewalls and wor...
TAPAS: A Tool for rApid Prototyping of Adaptive Streaming algorithms The central component of any adaptive video streaming system is the stream-switching controller. This paper introduces TAPAS, an open-source Tool for rApid Prototyping of Adaptive Streaming control algorithms. TAPAS is a flexible and extensible video streaming client written in python that allows to easily design and carry out experimental performance evaluations of adaptive streaming controllers without needing to write the code to download video segments, parse manifest files, and decode the video. TAPAS currently supports DASH and HLS and has been designed to minimize the CPU and memory footprint so that experiments involving a large number of concurrent video flows can be carried out using a single client machine. An adaptive streaming controller is implemented to illustrate the simplicity of the tool along with a performance evaluation which validates the tool.
The MPEG-DASH Standard for Multimedia Streaming Over the Internet Editor's Note: MPEG has recently finalized a new standard to enable dynamic and adaptive streaming of media over HTTP. This standard aims to address the interoperability needs between devices and servers of various vendors. There is broad industry support for this new standard, which offers the promise of transforming the media-streaming landscape. —Anthony Vetro
The algebra of fuzzy truth values The purpose of this paper is to give a straightforward mathematical treatment of algebras of fuzzy truth values for type-2 fuzzy sets.
Internet of Things (IoT): A vision, architectural elements, and future directions Ubiquitous sensing enabled by Wireless Sensor Network (WSN) technologies cuts across many areas of modern day living. This offers the ability to measure, infer and understand environmental indicators, from delicate ecologies and natural resources to urban environments. The proliferation of these devices in a communicating-actuating network creates the Internet of Things (IoT), wherein sensors and actuators blend seamlessly with the environment around us, and the information is shared across platforms in order to develop a common operating picture (COP). Fueled by the recent adaptation of a variety of enabling wireless technologies such as RFID tags and embedded sensor and actuator nodes, the IoT has stepped out of its infancy and is the next revolutionary technology in transforming the Internet into a fully integrated Future Internet. As we move from www (static pages web) to web2 (social networking web) to web3 (ubiquitous computing web), the need for data-on-demand using sophisticated intuitive queries increases significantly. This paper presents a Cloud centric vision for worldwide implementation of Internet of Things. The key enabling technologies and application domains that are likely to drive IoT research in the near future are discussed. A Cloud implementation using Aneka, which is based on interaction of private and public Clouds is presented. We conclude our IoT vision by expanding on the need for convergence of WSN, the Internet and distributed computing directed at technological research community.
Karhunen-Loève approximation of random fields by generalized fast multipole methods KL approximation of a possibly instationary random field a(ω, x) ∈ L^2(Ω, dP; L^∞(D)) subject to a prescribed mean field E_a(x) = ∫_Ω a(ω, x) dP(ω) and covariance V_a(x, x') = ∫_Ω (a(ω, x) - E_a(x))(a(ω, x') - E_a(x')) dP(ω) in a polyhedral domain D ⊂ R^d is analyzed. We show how, for stationary covariances V_a(x, x') = g_a(|x - x'|) with g_a(z) analytic outside of z = 0, an M-term approximate KL-expansion a_M(ω, x) of a(ω, x) can be computed in log-linear complexity. The approach applies in arbitrary domains D and for nonseparable covariances C_a. It involves Galerkin approximation of the KL eigenvalue problem by discontinuous finite elements of degree p ≥ 0 on a quasiuniform, possibly unstructured mesh of width h in D, plus a generalized fast multipole accelerated Krylov eigensolver. The approximate KL-expansion a_M(ω, x) of a(ω, x) has accuracy O(exp(-b M^{1/d})) if g_a is analytic at z = 0 and accuracy O(M^{-k/d}) if g_a is C^k at zero. It is obtained in O(M N (log N)^b) operations, where N = O(h^{-d}).
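A plain dense-eigensolver illustration of the M-term KL expansion discussed above: diagonalize the covariance on a grid and keep the leading modes. This is only a toy version (no fast multipole acceleration, no Galerkin/DG machinery as in the paper); the exponential covariance, correlation length, and grid size are arbitrary choices.

import numpy as np

n, M = 200, 10
x = np.linspace(0.0, 1.0, n)
ell = 0.2                                              # correlation length (illustrative)
C = np.exp(-np.abs(x[:, None] - x[None, :]) / ell)     # stationary covariance g_a(|x - x'|)

lam, phi = np.linalg.eigh(C)                           # discrete KL eigenpairs
order = np.argsort(lam)[::-1]
lam_M, phi_M = lam[order[:M]], phi[:, order[:M]]

# One sample path of the truncated zero-mean field: sum_k sqrt(lam_k) phi_k xi_k
rng = np.random.default_rng(0)
sample = (phi_M * np.sqrt(lam_M)) @ rng.standard_normal(M)
print("variance fraction captured by M terms:", lam_M.sum() / lam.sum())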
Exact and Approximate Sparse Solutions of Underdetermined Linear Equations In this paper, we empirically investigate the NP-hard problem of finding sparsest solutions to linear equation systems, i.e., solutions with as few nonzeros as possible. This problem has recently received considerable interest in the sparse approximation and signal processing literature. We use a branch-and-cut approach via the maximum feasible subsystem problem to compute optimal solutions for small instances and investigate the uniqueness of the optimal solutions. We furthermore discuss six (modifications of) heuristics for this problem that appear in different parts of the literature. For small instances, the exact optimal solutions allow us to evaluate the quality of the heuristics, while for larger instances we compare their relative performance. One outcome is that the so-called basis pursuit heuristic performs worse, compared to the other methods. Among the best heuristics are a method due to Mangasarian and one due to Chinneck.
Restricted Eigenvalue Properties for Correlated Gaussian Designs Methods based on l1-relaxation, such as basis pursuit and the Lasso, are very popular for sparse regression in high dimensions. The conditions for success of these methods are now well-understood: (1) exact recovery in the noiseless setting is possible if and only if the design matrix X satisfies the restricted nullspace property, and (2) the squared l2-error of a Lasso estimate decays at the minimax optimal rate k log p / n, where k is the sparsity of the p-dimensional regression problem with additive Gaussian noise, whenever the design satisfies a restricted eigenvalue condition. The key issue is thus to determine when the design matrix X satisfies these desirable properties. Thus far, there have been numerous results showing that the restricted isometry property, which implies both the restricted nullspace and eigenvalue conditions, is satisfied when all entries of X are independent and identically distributed (i.i.d.), or the rows are unitary. This paper proves directly that the restricted nullspace and eigenvalue conditions hold with high probability for quite general classes of Gaussian matrices for which the predictors may be highly dependent, and hence restricted isometry conditions can be violated with high probability. In this way, our results extend the attractive theoretical guarantees on l1-relaxations to a much broader class of problems than the case of completely independent or unitary designs.
A new hybrid artificial neural networks and fuzzy regression model for time series forecasting Quantitative methods have nowadays become very important tools for forecasting purposes in financial markets as for improved decisions and investments. Forecasting accuracy is one of the most important factors involved in selecting a forecasting method; hence, never has research directed at improving upon the effectiveness of time series models stopped. Artificial neural networks (ANNs) are flexible computing frameworks and universal approximators that can be applied to a wide range of forecasting problems with a high degree of accuracy. However, ANNs need a large amount of historical data in order to yield accurate results. In a real world situation and in financial markets specifically, the environment is full of uncertainties and changes occur rapidly; thus, future situations must be usually forecasted using the scant data made available over a short span of time. Therefore, forecasting in these situations requires methods that work efficiently with incomplete data. Although fuzzy forecasting methods are suitable for incomplete data situations, their performance is not always satisfactory. In this paper, based on the basic concepts of ANNs and fuzzy regression models, a new hybrid method is proposed that yields more accurate results with incomplete data sets. In our proposed model, the advantages of ANNs and fuzzy regression are combined to overcome the limitations in both ANNs and fuzzy regression models. The empirical results of financial market forecasting indicate that the proposed model can be an effective way of improving forecasting accuracy.
Spectral Methods for Parameterized Matrix Equations. We apply polynomial approximation methods-known in the numerical PDEs context as spectral methods-to approximate the vector-valued function that satisfies a linear system of equations where the matrix and the right-hand side depend on a parameter. We derive both an interpolatory pseudospectral method and a residual-minimizing Galerkin method, and we show how each can be interpreted as solving a truncated infinite system of equations; the difference between the two methods lies in where the truncation occurs. Using classical theory, we derive asymptotic error estimates related to the region of analyticity of the solution, and we present a practical residual error estimate. We verify the results with two numerical examples.
Fuzzy management of user actions during hypermedia navigation The recent dramatic advances in the field of multimedia systems have made practicable the development of Intelligent Tutoring Multimedia (ITM). These systems contain hypertextual structures that belong to the class of hypermedia systems. ITM development involves the definition of a suitable navigation model in addition to the other modules of an Intelligent Tutoring System (ITS), i.e. the Database module, User module, Interface module, and Teaching module. The navigation module receives as inputs the state of the system and the user's current assessment and tries to optimize the fruition of the knowledge base. Moreover, this module is responsible for managing the effects of disorientation and cognitive overhead. In this paper we deal essentially with four topics: (i) defining a fuzzy-based user model able to adequately manage the user's cognitive state, orientation, and cognitive overhead; (ii) introducing fuzzy tools within the navigation module in order to carry out moves on the grounds of meaningful data; (iii) defining a set of functions that can dynamically infer new states concerning the user's interests; and (iv) classifying the hypermedia actions according to their semantics.
1.04
0.04
0.04
0.02
0.004444
0.000466
0
0
0
0
0
0
0
0
Incoherent dictionaries and the statistical restricted isometry property In this paper we formulate and prove a statistical version of the restricted isometry property (SRIP for short) which holds in general for any incoherent dictionary D which is a disjoint union of orthonormal bases. In addition, we prove that, under appropriate normalization, the spectral distribution of the associated Gram operator converges in probability to the Sato-Tate (also called semi-circle) distribution. The result is then applied to various dictionaries that arise naturally in the setting of finite harmonic analysis, giving, in particular, a better understanding of a conjecture of Calderbank concerning RIP for the Heisenberg dictionary of chirp-like functions.
Deterministic Designs with Deterministic Guarantees: Toeplitz Compressed Sensing Matrices, Sequence Designs and System Identification In this paper we present a new family of discrete sequences having "random like" uniformly decaying auto-correlation properties. The new class of infinite length sequences are higher order chirps constructed using irrational numbers. Exploiting results from the theory of continued fractions and diophantine approximations, we show that the class of sequences so formed has the property that the worst-case auto-correlation coefficients for every finite length sequence decays at a polynomial rate. These sequences display doppler immunity as well. We also show that Toeplitz matrices formed from such sequences satisfy restricted-isometry-property (RIP), a concept that has played a central role recently in Compressed Sensing applications. Compressed sensing has conventionally dealt with sensing matrices with arbitrary components. Nevertheless, such arbitrary sensing matrices are not appropriate for linear system identification and one must employ Toeplitz structured sensing matrices. Linear system identification plays a central role in a wide variety of applications such as channel estimation for multipath wireless systems as well as control system applications. Toeplitz matrices are also desirable on account of their filtering structure, which allows for fast implementation together with reduced storage requirements.
Compressive Sensing Using Low Density Frames We consider the compressive sensing of a sparse or compressible signal x ∈ R^M. We explicitly construct a class of measurement matrices, referred to as the low density frames, and develop decoding algorithms that produce an accurate estimate x̂ even in the presence of additive noise. Low density frames are sparse matrices and have small storage requirements. Our decoding algorithms for these frames have O(M) complexity. Simulation results are provided, demonstrating that our approach significantly outperforms state-of-the-art recovery algorithms for numerous cases of interest. In particular, for Gaussian sparse signals and Gaussian noise, we are within 2 dB of the theoretical lower bound in most cases.
Efficient and Robust Compressed Sensing using High-Quality Expander Graphs Expander graphs have been recently proposed to construct efficient compressed sensing algorithms. In particular, it has been shown that any $n$-dimensional vector that is $k$-sparse (with $k\ll n$) can be fully recovered using $O(k\log\frac{n}{k})$ measurements and only $O(k\log n)$ simple recovery iterations. In this paper we improve upon this result by considering expander graphs with expansion coefficient beyond 3/4 and show that, with the same number of measurements, only $O(k)$ recovery iterations are required, which is a significant improvement when $n$ is large. In fact, full recovery can be accomplished by at most $2k$ very simple iterations. The number of iterations can be made arbitrarily close to $k$, and the recovery algorithm can be implemented very efficiently using a simple binary search tree. We also show that by tolerating a small penalty on the number of measurements, and not on the number of recovery iterations, one can use the efficient construction of a family of expander graphs to come up with explicit measurement matrices for this method. We compare our result with other recently developed expander-graph-based methods and argue that it compares favorably both in terms of the number of required measurements and in terms of the recovery time complexity. Finally we will show how our analysis extends to give a robust algorithm that finds the position and sign of the $k$ significant elements of an almost $k$-sparse signal and then, using very simple optimization techniques, finds in sublinear time a $k$-sparse signal which approximates the original signal with very high precision.
Combining geometry and combinatorics: a unified approach to sparse signal recovery There are two main algorithmic approaches to sparse signal recovery: geometric and combinatorial. The geometric approach starts with a geometric constraint on the measurement matrix Φ and then uses linear programming to decode information about x from Φx. The combinatorial approach constructs Φ and a combinatorial decoding algorithm to match. We present a unified approach to these two classes of sparse signal recovery algorithms. The unifying elements are the adjacency matrices of high-quality unbalanced expanders. We generalize the notion of the Restricted Isometry Property (RIP), crucial to compressed sensing results for signal recovery, from the Euclidean norm to the ℓp norm for p ≈ 1, and then show that unbalanced expanders are essentially equivalent to RIP-p matrices. From known deterministic constructions for such matrices, we obtain new deterministic measurement matrix constructions and algorithms for signal recovery which, compared to previous deterministic algorithms, are superior in either the number of measurements or in noise tolerance.
Subspace pursuit for compressive sensing signal reconstruction We propose a new method for reconstruction of sparse signals with and without noisy perturbations, termed the subspace pursuit algorithm. The algorithm has two important characteristics: low computational complexity, comparable to that of orthogonal matching pursuit techniques when applied to very sparse signals, and reconstruction accuracy of the same order as that of linear programming (LP) optimization methods. The presented analysis shows that in the noiseless setting, the proposed algorithm can exactly reconstruct arbitrary sparse signals provided that the sensing matrix satisfies the restricted isometry property with a constant parameter. In the noisy setting and in the case that the signal is not exactly sparse, it can be shown that the mean-squared error of the reconstruction is upper-bounded by constant multiples of the measurement and signal perturbation energies.
Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information This paper considers the model problem of reconstructing an object from incomplete frequency samples. Consider a discrete-time signal f ∈ C^N and a randomly chosen set of frequencies Ω. Is it possible to reconstruct f from the partial knowledge of its Fourier coefficients on the set Ω? A typical result of this paper is as follows. Suppose that f is a superposition of |T| spikes f(t) = Σ_{τ∈T} f(τ) δ(t-τ) obeying |T| ≤ C_M · (log N)^{-1} · |Ω| for some constant C_M > 0. We do not know the locations of the spikes nor their amplitudes. Then with probability at least 1 - O(N^{-M}), f can be reconstructed exactly as the solution to the ℓ1 minimization problem. In short, exact recovery may be obtained by solving a convex optimization problem. We give numerical values for C_M which depend on the desired probability of success. Our result may be interpreted as a novel kind of nonlinear sampling theorem. In effect, it says that any signal made out of |T| spikes may be recovered by convex programming from almost every set of frequencies of size O(|T|·log N). Moreover, this is nearly optimal in the sense that any method succeeding with probability 1 - O(N^{-M}) would in general require a number of frequency samples at least proportional to |T|·log N. The methodology extends to a variety of other situations and higher dimensions. For example, we show how one can reconstruct a piecewise constant (one- or two-dimensional) object from incomplete frequency samples - provided that the number of jumps (discontinuities) obeys the condition above - by minimizing other convex functionals such as the total variation of f.
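A toy version of the recovery problem above: a spike train is measured through a small random subset of transform coefficients and recovered by ℓ1 minimization (basis pursuit) posed as a linear program. To keep the program real-valued, the sketch uses a partial DCT rather than the DFT of the paper, and the sizes and sparsity level are arbitrary.

import numpy as np
from scipy.fft import dct
from scipy.optimize import linprog

rng = np.random.default_rng(1)
N, T, m = 128, 5, 40                             # signal length, #spikes, #measurements
f = np.zeros(N)
f[rng.choice(N, T, replace=False)] = rng.standard_normal(T)

Phi = dct(np.eye(N), axis=0, norm="ortho")       # full DCT matrix (columns = dct of unit vectors)
A = Phi[rng.choice(N, m, replace=False), :]      # keep m random "frequency" rows
y = A @ f

# Basis pursuit: min ||x||_1 s.t. A x = y, via the split x = u - v with u, v >= 0.
c = np.ones(2 * N)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * N))
x_hat = res.x[:N] - res.x[N:]
print("max reconstruction error:", np.abs(x_hat - f).max())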
Model-Based Compressive Sensing Compressive sensing (CS) is an alternative to Shannon/Nyquist sampling for the acquisition of sparse or compressible signals that can be well approximated by just K ≪ N elements from an N-dimensional basis. Instead of taking periodic samples, CS measures inner products with M < N random vectors and then recovers the signal via a sparsity-seeking optimization or greedy algorithm. Standard CS dictates that robust signal recovery is possible from M = O(K log(N/K)) measurements. It is possible to substantially decrease M without sacrificing robustness by leveraging more realistic signal models that go beyond simple sparsity and compressibility by including structural dependencies between the values and locations of the signal coefficients. This paper introduces a model-based CS theory that parallels the conventional theory and provides concrete guidelines on how to create model-based recovery algorithms with provable performance guarantees. A highlight is the introduction of a new class of structured compressible signals along with a new sufficient condition for robust structured compressible signal recovery that we dub the restricted amplification property, which is the natural counterpart to the restricted isometry property of conventional CS. Two examples integrate two relevant signal models-wavelet trees and block sparsity-into two state-of-the-art CS recovery algorithms and prove that they offer robust recovery from just M = O(K) measurements. Extensive numerical simulations demonstrate the validity and applicability of our new theory and algorithms.
Quantization of Sparse Representations Compressive sensing (CS) is a new signal acquisition technique for sparse and compressible signals. Rather than uniformly sampling the signal, CS computes inner products with randomized basis functions; the signal is then recovered by a convex optimization. Random CS measurements are universal in the sense that the same acquisition system is sufficient for signals sparse in any representation. This paper examines the quantization of strictly sparse, power-limited signals and concludes that CS with scalar quantization uses its allocated rate inefficiently.
Cubature Kalman Filters In this paper, we present a new nonlinear filter for high-dimensional state estimation, which we have named the cubature Kalman filter (CKF). The heart of the CKF is a spherical-radial cubature rule, which makes it possible to numerically compute multivariate moment integrals encountered in the nonlinear Bayesian filter. Specifically, we derive a third-degree spherical-radial cubature rule that provides a set of cubature points scaling linearly with the state-vector dimension. The CKF may therefore provide a systematic solution for high-dimensional nonlinear filtering problems. The paper also includes the derivation of a square-root version of the CKF for improved numerical stability. The CKF is tested experimentally in two nonlinear state estimation problems. In the first problem, the proposed cubature rule is used to compute the second-order statistics of a nonlinearly transformed Gaussian random variable. The second problem addresses the use of the CKF for tracking a maneuvering aircraft. The results of both experiments demonstrate the improved performance of the CKF over conventional nonlinear filters.
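The third-degree spherical-radial rule at the heart of the CKF uses 2n equally weighted cubature points. The sketch below applies it to the paper's first type of experiment, propagating a Gaussian through a nonlinearity; the polar-to-Cartesian map and the numbers are illustrative choices, not the paper's exact setup.

import numpy as np

def cubature_points(mean, cov):
    """2n third-degree spherical-radial cubature points for N(mean, cov)."""
    n = mean.size
    S = np.linalg.cholesky(cov)
    xi = np.sqrt(n) * np.hstack([np.eye(n), -np.eye(n)])   # +/- sqrt(n) unit directions
    return mean[:, None] + S @ xi                           # shape (n, 2n)

def propagate_gaussian(f, mean, cov):
    """Approximate mean and covariance of f(x) for x ~ N(mean, cov)."""
    pts = cubature_points(mean, cov)
    Y = np.column_stack([f(p) for p in pts.T])   # f evaluated at each cubature point
    m = Y.mean(axis=1)                           # equal weights 1/(2n)
    d = Y - m[:, None]
    return m, (d @ d.T) / Y.shape[1]

# Illustrative nonlinearity: polar-to-Cartesian conversion.
f = lambda x: np.array([x[0] * np.cos(x[1]), x[0] * np.sin(x[1])])
mean = np.array([1.0, np.pi / 4])
cov = np.diag([0.01, 0.01])
print(propagate_gaussian(f, mean, cov))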
A parallel hashed oct-tree N-body algorithm The authors report on an efficient adaptive N-body method which we have recently designed and implemented. The algorithm computes the forces on an arbitrary distribution of bodies in a time which scales as N log N with the particle number. The accuracy of the force calculations is analytically bounded, and can be adjusted via a user defined parameter between a few percent relative accuracy, down to machine arithmetic accuracy. Instead of using pointers to indicate the topology of the tree, the authors identify each possible cell with a key. The mapping of keys into memory locations is achieved via a hash table. This allows the program to access data in an efficient manner across multiple processors. Performance of the parallel program is measured on the 512 processor Intel Touchstone Delta system. Comments on a number of wide-ranging applications which can benefit from application of this type of algorithm are included.
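The pointer-free tree representation described above hinges on giving every cell an integer key; a Morton-style interleaved-bit key plus an ordinary hash map is enough to sketch the idea. The 10-bit resolution and the sample coordinates below are arbitrary, and the dict stands in for the distributed hash table of the paper.

def morton_key(ix, iy, iz, bits=10):
    """Interleave the low `bits` bits of three integer cell coordinates into a
    single key; the leading 1 bit marks the refinement level of the cell."""
    key = 1
    for b in range(bits - 1, -1, -1):
        key = (key << 3) | (((ix >> b) & 1) << 2) | (((iy >> b) & 1) << 1) | ((iz >> b) & 1)
    return key

# Bodies are binned into leaf cells, and cells live in an ordinary hash map.
cells = {}
bodies = [(0.12, 0.75, 0.33), (0.11, 0.74, 0.30), (0.90, 0.10, 0.55)]
for x, y, z in bodies:
    k = morton_key(int(x * 1024), int(y * 1024), int(z * 1024))
    cells.setdefault(k, []).append((x, y, z))
print(len(cells), "occupied leaf cells")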
RMIT3DV: Pre-announcement of a creative commons uncompressed HD 3D video database There has been much recent interest, both from industry and research communities, in 3D video technologies and processing techniques. However, with the standardisation of 3D video coding well underway and researchers studying 3D multimedia delivery and users' quality of multimedia experience in 3D video environments, there exist few publicly available databases of 3D video content. Further, there are even fewer sources of uncompressed 3D video content for flexible use in a number of research studies and applications. This paper thus presents a preliminary version of RMIT3DV: an uncompressed HD 3D video database currently composed of 31 video sequences that encompass a range of environments, lighting conditions, textures, motion, etc. The database was natively filmed on a professional HD 3D camera, and this paper describes the 3D film production workflow in addition to the database distribution and potential future applications of the content. The database is freely available online via the creative commons license, and researchers are encouraged to contribute 3D content to grow the resource for the (HD) 3D video research community.
Ifcm: Fuzzy Clustering For Rule Extraction Of Interval Type-2 Fuzzy Logic System Compared with traditional Type-1 fuzzy logic systems, Type-2 fuzzy logic systems (T2FLS) are suitable for handling situations where a great deal of uncertainty is present. However, how to extract fuzzy rules automatically from input/output data is still an important issue, because human experts sometimes cannot obtain valid rules from unknown systems. Fuzzy c-means clustering (FCM) is one of the algorithms frequently used to extract rules for Type-1 fuzzy logic systems, but its application is limited to point-valued data sets. This paper introduces an enhanced clustering algorithm, called interval fuzzy c-means clustering (IFCM), which is adequate for dealing with interval sets. Moreover, it is shown that the proposed IFCM algorithm can be used to extract fuzzy rules for an interval Type-2 fuzzy logic system. Simulation results are included at the end to show the validity of IFCM.
Fuzzy optimization of units products in mix-product selection problem using fuzzy linear programming approach In this paper, the modified S-curve membership function methodology is used in a real-life industrial problem of mix-product selection. This problem occurs in production planning management, where a decision maker plays an important role in making decisions in an uncertain environment. As analysts, we try to find a good enough solution for the decision maker to make a final decision. An industrial application of fuzzy linear programming (FLP) through the S-curve membership function has been investigated using a set of real-life data collected from a chocolate manufacturing company. The problem of fuzzy product mix selection has been defined. The objective of this paper is to find optimal units of products with a higher level of satisfaction, with vagueness as a key factor. Since several decisions were to be taken, a table of optimal units of products with respect to vagueness and degree of satisfaction has been defined to identify the solution with a higher level of units of products and a higher degree of satisfaction. The fuzzy outcome shows that higher units of products need not lead to a higher degree of satisfaction. The findings of this work indicate that the optimal decision depends on the vagueness factor in the fuzzy system of the mix-product selection problem. Furthermore, a high level of units of products is obtained when the vagueness is low.
1.107651
0.036795
0.035649
0.018547
0.005771
0.001697
0.000365
0.000056
0.000001
0
0
0
0
0
Resolution of a system of fuzzy polynomial equations using the Gröbner basis The occurrence of imprecision in the real world is inevitable due to unexpected situations, and imprecision is often involved in any engineering design process. Imprecision and uncertainty are often interpreted as fuzziness. Fuzzy systems have an essential role in uncertainty modelling, as they can formulate the uncertainty in the actual environment. In this paper, a new approach is proposed to solve a system of fuzzy polynomial equations based on the Gröbner basis. In this approach, first, the h-cut of a system of fuzzy polynomial equations is computed, and a parametric form for the fuzzy system with respect to the parameter h is obtained. Then, a Gröbner basis is computed for the ideal generated by the h-cuts of the system with respect to the lexicographical order using Faugère's algorithm, i.e., the F4 algorithm. The Gröbner basis of the system has an upper triangular structure, so the system can be solved using forward substitution. Hence, all the solutions of the system of fuzzy polynomial equations can easily be obtained. Finally, the proposed approach is compared with current numerical methods. Some theorems together with numerical examples and applications are presented to show the efficiency of our method with respect to the other methods.
The concept of a linguistic variable and its application to approximate reasoning-III By a linguistic variable we mean a variable whose values are words or sentences in a natural or artificial language. For example, Age is a linguistic variable if its values are linguistic rather than numerical, i.e., young, not young, very young, quite young, old, not very old and not very young, etc., rather than 20, 21, 22, 23. In more specific terms, a linguistic variable is characterized by a quintuple (X, T(X), U, G, M) in which X is the name of the variable; T(X) is the term-set of X, that is, the collection of its linguistic values; U is a universe of discourse; G is a syntactic rule which generates the terms in T(X); and M is a semantic rule which associates with each linguistic value X its meaning, M(X), where M(X) denotes a fuzzy subset of U. The meaning of a linguistic value X is characterized by a compatibility function, c : U → [0, 1], which associates with each u in U its compatibility with X. Thus, the compatibility of age 27 with young might be 0.7, while that of 35 might be 0.2. The function of the semantic rule is to relate the compatibilities of the so-called primary terms in a composite linguistic value-e.g., young and old in not very young and not very old-to the compatibility of the composite value. To this end, the hedges such as very, quite, extremely, etc., as well as the connectives and and or are treated as nonlinear operators which modify the meaning of their operands in a specified fashion. The
The Vienna Definition Language
Artificial Paranoia
From Computing with Numbers to Computing with Words - From Manipulation of Measurements to Manipulation of Perceptions Interest in issues relating to consciousness has grown markedly during the last several years. And yet, nobody can claim that consciousness is a well-understood concept that lends itself to precise analysis. It may be argued that, as a concept, consciousness is much too complex to fit into the conceptual structure of existing theories based on Aristotelian logic and probability theory. An approach suggested in this paper links consciousness to perceptions and perceptions to their descriptors in a natural language. In this way, those aspects of consciousness which relate to reasoning and concept formation are linked to what is referred to as the methodology of computing with words (CW). Computing, in its usual sense, is centered on manipulation of numbers and symbols. In contrast, computing with words, or CW for short, is a methodology in which the objects of computation are words and propositions drawn from a natural language (e.g., small, large, far, heavy, not very likely, the price of gas is low and declining, Berkeley is near San Francisco, it is very unlikely that there will be a significant increase in the price of oil in the near future, etc.). Computing with words is inspired by the remarkable human capability to perform a wide variety of physical and mental tasks without any measurements and any computations. Familiar examples of such tasks are parking a car, driving in heavy traffic, playing golf, riding a bicycle, understanding speech, and summarizing a story. Underlying this remarkable capability is the brain's crucial ability to manipulate perceptions--perceptions of distance, size, weight, color, speed, time, direction, force, number, truth, likelihood, and other characteristics of physical and mental objects. Manipulation of perceptions plays a key role in human recognition, decision and execution processes. As a methodology, computing with words provides a foundation for a computational theory of perceptions: a theory which may have an important bearing on how humans make--and machines might make--perception-based rational decisions in an environment of imprecision, uncertainty, and partial truth. A basic difference between perceptions and measurements is that, in general, measurements are crisp, whereas perceptions are fuzzy. One of the fundamental aims of science has been and continues to be that of progressing from perceptions to measurements. Pursuit of this aim has led to brilliant successes. We have sent men to the moon; we can build computers that are capable of performing billions of computations per second; we have constructed telescopes that can explore the far reaches of the universe; and we can date the age of rocks that are millions of years old. But alongside the brilliant successes stand conspicuous underachievements and outright failures. We cannot build robots that can move with the agility of animals or humans; we cannot automate driving in heavy traffic; we cannot translate from one language to another at the level of a human interpreter; we cannot create programs that can summarize non-trivial stories; our ability to model the behavior of economic systems leaves much to be desired; and we cannot build machines that can compete with children in the performance of a wide variety of physical and cognitive tasks. 
It may be argued that underlying the underachievements and failures is the unavailability of a methodology for reasoning and computing with perceptions rather than measurements. An outline of such a methodology--referred to as a computational theory of perceptions--is presented in this paper. The computational theory of perceptions (CTP) is based on the methodology of CW. In CTP, words play the role of labels of perceptions, and, more generally, perceptions are expressed as propositions in a natural language. CW-based techniques are employed to translate propositions expressed in a natural language into what is called the Generalized Constraint Language (GCL). In this language, the meaning of a proposition is expressed as a generalized constraint, X isr R, where X is the constrained variable, R is the constraining relation, and isr is a variable copula in which r is an indexing variable whose value defines the way in which R constrains X. Among the basic types of constraints are possibilistic, veristic, probabilistic, random set, Pawlak set, fuzzy graph, and usuality. The wide variety of constraints in GCL makes GCL a much more expressive language than the language of predicate logic. In CW, the initial and terminal data sets, IDS and TDS, are assumed to consist of propositions expressed in a natural language. These propositions are translated, respectively, into antecedent and consequent constraints. Consequent constraints are derived from antecedent constraints through the use of rules of constraint propagation. The principal constraint propagation rule is the generalized extension principle. (ABSTRACT TRUNCATED)
Fuzzy Logic and the Resolution Principle The relationship between fuzzy logic and two-valued logic in the context of the first order predicate calculus is discussed. It is proved that if every clause in a set of clauses is something more than a "half-truth" and the most reliable clause has truth-value a and the most unreliable clause has truth-value b, then we are guaranteed that all the logical consequences obtained by repeatedly applying the resolution principle will have truth-value between a and b. The significance of this theorem is also discussed.
A framework for accounting for process model uncertainty in statistical static timing analysis In recent years, a large body of statistical static timing analysis and statistical circuit optimization techniques have emerged, providing important avenues to account for the increasing process variations in design. The realization of these statistical methods often demands the availability of statistical process variation models whose accuracy, however, is severely hampered by limitations in test structure design, test time and various sources of inaccuracy inevitably incurred in process characterization. Consequently, it is desired that statistical circuit analysis and optimization can be conducted based upon imprecise statistical variation models. In this paper, we present an efficient importance sampling based optimization framework that can translate the uncertainty in the process models to the uncertainty in parametric yield, thus offering the very much desired statistical best/worst-case circuit analysis capability accounting for unavoidable complexity in process characterization. Unlike the previously proposed statistical learning and probabilistic interval based techniques, our new technique efficiently computes tight bounds of the parametric circuit yields based upon bounds of statistical process model parameters while fully capturing correlation between various process variations. Furthermore, our new technique provides valuable guidance to process characterization. Examples are included to demonstrate the application of our general analysis framework under the context of statistical static timing analysis.
Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography A new paradigm, Random Sample Consensus (RANSAC), for fitting a model to experimental data is introduced. RANSAC is capable of interpreting/smoothing data containing a significant percentage of gross errors, and is thus ideally suited for applications in automated image analysis where interpretation is based on the data provided by error-prone feature detectors. A major portion of this paper describes the application of RANSAC to the Location Determination Problem (LDP): Given an image depicting a set of landmarks with known locations, determine that point in space from which the image was obtained. In response to a RANSAC requirement, new results are derived on the minimum number of landmarks needed to obtain a solution, and algorithms are presented for computing these minimum-landmark solutions in closed form. These results provide the basis for an automatic system that can solve the LDP under difficult viewing.
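A compact sketch of the RANSAC loop for the simplest model class, a 2-D line: repeatedly fit the model to a minimal random sample, count inliers against a threshold, and refit on the best consensus set. The iteration count, threshold, and synthetic data are arbitrary choices; the paper applies the same paradigm to the location determination problem.

import numpy as np

def ransac_line(points, n_iter=200, thresh=0.05, seed=0):
    """Robustly fit y = a*x + b to an (N, 2) array containing gross outliers."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iter):
        i, j = rng.choice(len(points), size=2, replace=False)
        (x1, y1), (x2, y2) = points[i], points[j]
        if abs(x2 - x1) < 1e-12:                      # degenerate minimal sample
            continue
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        inliers = np.abs(points[:, 1] - (a * points[:, 0] + b)) < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Final least-squares fit on the consensus set.
    a, b = np.polyfit(points[best_inliers, 0], points[best_inliers, 1], 1)
    return a, b, best_inliers

rng = np.random.default_rng(1)
x = rng.uniform(0.0, 1.0, 100)
y = 2.0 * x + 0.5 + 0.01 * rng.standard_normal(100)
y[:30] += rng.uniform(-2.0, 2.0, 30)                  # 30% gross errors
a, b, inliers = ransac_line(np.column_stack([x, y]))
print(a, b, inliers.sum())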
On the construction of sparse tensor product spaces. Let Ω_1 ⊂ R^{n_1} and Ω_2 ⊂ R^{n_2} be two given domains and consider on each domain a multiscale sequence of ansatz spaces of polynomial exactness r_1 and r_2, respectively. In this paper, we study the optimal construction of sparse tensor products made from these spaces. In particular, we derive the resulting cost complexities to approximate functions with anisotropic and isotropic smoothness on the tensor product domain Ω_1 × Ω_2. Numerical results validate our theoretical findings.
Optimal design of a CMOS op-amp via geometric programming We describe a new method for determining component values and transistor dimensions for CMOS operational amplifiers (op-amps). We observe that a wide variety of design objectives and constraints have a special form, i.e., they are posynomial functions of the design variables. As a result, the amplifier design problem can be expressed as a special form of optimization problem called geometric programming, for which very efficient global optimization methods have been developed. As a consequence we can efficiently determine globally optimal amplifier designs or globally optimal tradeoffs among competing performance measures such as power, open-loop gain, and bandwidth. Our method, therefore, yields completely automated sizing of (globally) optimal CMOS amplifiers, directly from specifications. In this paper, we apply this method to a specific widely used operational amplifier architecture, showing in detail how to formulate the design problem as a geometric program. We compute globally optimal tradeoff curves relating performance measures such as power dissipation, unity-gain bandwidth, and open-loop gain. We show how the method can be used to size robust designs, i.e., designs guaranteed to meet the specifications for a variety of process conditions and parameters.
Using polynomial chaos to compute the influence of multiple random surfers in the PageRank model The PageRank equation computes the importance of pages in a web graph relative to a single random surfer with a constant teleportation coefficient. To be globally relevant, the teleportation coefficient should account for the influence of all users. Therefore, we correct the PageRank formulation by modeling the teleportation coefficient as a random variable distributed according to user behavior. With this correction, the PageRank values themselves become random. We present two methods to quantify the uncertainty in the random PageRank: a Monte Carlo sampling algorithm and an algorithm based on the truncated polynomial chaos expansion of the random quantities. With each of these methods, we compute the expectation and standard deviation of the PageRanks. Our statistical analysis shows that the standard deviations of the PageRanks are uncorrelated with the PageRank vector.
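The Monte Carlo variant described above is simple to sketch: draw teleportation coefficients from a user-behavior distribution, solve PageRank for each draw, and report the mean and standard deviation of the resulting vectors. The Beta distribution, sample count, and tiny graph below are illustrative choices, not the paper's data.

import numpy as np

def pagerank(P, alpha, tol=1e-10, max_iter=1000):
    """Power iteration for PageRank with teleportation coefficient alpha.
    P must be a column-stochastic link matrix."""
    n = P.shape[0]
    x = np.full(n, 1.0 / n)
    v = np.full(n, 1.0 / n)                      # uniform teleportation vector
    for _ in range(max_iter):
        x_new = alpha * (P @ x) + (1.0 - alpha) * v
        if np.abs(x_new - x).sum() < tol:
            break
        x = x_new
    return x

# Tiny illustrative column-stochastic graph with 4 pages.
P = np.array([[0.0, 0.5, 0.0, 0.0],
              [1.0, 0.0, 0.5, 0.0],
              [0.0, 0.5, 0.0, 1.0],
              [0.0, 0.0, 0.5, 0.0]])

rng = np.random.default_rng(0)
alphas = rng.beta(a=8.0, b=2.0, size=2000)       # assumed "random surfer" behavior model
samples = np.array([pagerank(P, a) for a in alphas])
print("E[PageRank]   =", samples.mean(axis=0))
print("Std[PageRank] =", samples.std(axis=0))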
Opposites and Measures of Extremism in Concepts and Constructs We discuss the distinction between different types of opposites, i.e. negation and antonym, in terms of their representation by fuzzy subsets. The idea of a construct in terms of Kelly's theory of personal construct is discussed. A measure of the extremism of a group of elements with respect to concept and its negation, and with respect to a concept and its antonym is introduced.
Fuzzy concepts and formal methods: some illustrative examples It has been recognised that formal methods are useful as a modelling tool in requirements engineering. Specification languages such as Z permit the precise and unambiguous modelling of system properties and behaviour. However, some system problems, particularly those drawn from the information systems (IS) problem domain, may be difficult to model in crisp or precise terms. It may also be desirable that formal modelling should commence as early as possible, even when our understanding of parts of the problem domain is only approximate. This paper identifies the problem types of interest and argues that they are characterised by uncertainty and imprecision. It suggests fuzzy set theory as a useful formalism for modelling aspects of this imprecision. The paper illustrates how a fuzzy logic toolkit for Z can be applied to such problem domains. Several examples are presented illustrating the representation of imprecise concepts as fuzzy sets and relations, and soft pre-conditions and system requirements as a series of linguistically quantified propositions.
Fuzzy optimization of units products in mix-product selection problem using fuzzy linear programming approach In this paper, the modified S-curve membership function methodology is applied to a real-life industrial problem of product-mix selection. This problem arises in production planning management, where a decision maker must make decisions in an uncertain environment. As analysts, we try to find a solution that is good enough for the decision maker to reach a final decision. An industrial application of fuzzy linear programming (FLP) through the S-curve membership function is investigated using a set of real-life data collected from a chocolate manufacturing company. The fuzzy product-mix selection problem is defined, and the objective of this paper is to find the optimal units of products with a higher level of satisfaction, with vagueness as a key factor. Since several decisions have to be taken, a table of optimal units of products with respect to vagueness and degree of satisfaction is constructed to identify solutions with both a higher number of units of products and a higher degree of satisfaction. The fuzzy outcome shows that a higher number of units of products need not lead to a higher degree of satisfaction. The findings of this work indicate that the optimal decision depends on the vagueness factor in the fuzzy product-mix selection problem; furthermore, the highest number of units of products is obtained when the vagueness is low.
1.24
0.000091
0.000015
0.000015
0.000011
0.000004
0
0
0
0
0
0
0
0
Advances in type-2 fuzzy sets and systems In this state-of-the-art paper, important advances that have been made during the past five years for both general and interval type-2 fuzzy sets and systems are described. Interest in type-2 subjects is worldwide and touches on a broad range of applications and many interesting theoretical topics. The main focus of this paper is on the theoretical topics, with descriptions of what they are, what has been accomplished, and what remains to be done.
Type-2 fuzzy hybrid expert system for prediction of tardiness in scheduling of steel continuous casting process This paper addresses an interval type-2 fuzzy (IT2F) hybrid expert system in order to predict the amount of tardiness where tardiness variables are represented by interval type-2 membership functions. For this purpose, IT2F disjunctive normal forms and fuzzy conjunctive normal forms are utilized in the inference engine. The main contribution of this paper is to present the IT2F hybrid expert system, which is the combination of the Mamdani and Sugeno methods. In order to predict the future amount of tardiness for continuous casting operation in a steel company in Canada, an autoregressive moving average model is used in the consequents of the rules. Parameters of the system are tuned by applying Adaptive-Network-Based Fuzzy Inference System. This method is compared with IT2F Takagi–Sugeno–Kang method in MATLAB, multiple-regression, and two other Type-1 fuzzy methods in literature. The results of computing the mean square error of these methods show that our proposed method has less error and high accuracy in comparison with other methods.
The three-dimensional fuzzy sets and their cut sets In this paper, a new kind of L-fuzzy set is introduced which is called the three-dimensional fuzzy set. We first put forward four kinds of cut sets on the three-dimensional fuzzy sets which are defined by the 4-valued fuzzy sets. Then, the definitions of 4-valued order nested sets and 4-valued inverse order nested sets are given. Based on them, the decomposition theorems and representation theorems are obtained. Furthermore, the left interval-valued intuitionistic fuzzy sets and the right interval-valued intuitionistic fuzzy sets are introduced. We show that the lattices constructed by these two special L-fuzzy sets are not equivalent to sublattices of the lattice constructed by the interval-valued intuitionistic fuzzy sets. Finally, we show that the three-dimensional fuzzy set is equivalent to the left interval-valued intuitionistic fuzzy set or the right interval-valued intuitionistic fuzzy set.
Three new cut sets of fuzzy sets and new theories of fuzzy sets Three new cut sets are introduced from the view points of neighborhood and Q-neighborhood in fuzzy topology and their properties are discussed. By the use of these cut sets, new decomposition theorems, new representation theorems, new extension principles and new fuzzy linear mappings are obtained. Then inner project of fuzzy relations, generalized extension principle and new composition rule of fuzzy relations are given. In the end, we present axiomatic descriptions for different cut sets and show the three most intrinsic properties for each cut set.
Xor-Implications and E-Implications: Classes of Fuzzy Implications Based on Fuzzy Xor The main contribution of this paper is to introduce an autonomous definition of the connective "fuzzy exclusive or" (fuzzy Xor, for short), which is independent of other connectives. Also, two canonical definitions of the connective Xor are obtained from the composition of fuzzy connectives, based on the commutative and associative properties related to the notions of triangular norms, triangular conorms and fuzzy negations. We show that the main properties of the classical connective Xor are preserved by the connective fuzzy Xor, and, therefore, this new definition of the connective fuzzy Xor extends the related classical approach. The definitions of fuzzy Xor-implications and fuzzy E-implications, induced by the fuzzy Xor connective, are also studied, and their main properties are analyzed. The relationships between the fuzzy Xor-implications and the fuzzy E-implications with automorphisms are explored.
Type-2 Fuzzy Soft Sets and Their Applications in Decision Making. Molodtsov introduced the theory of soft sets, which can be used as a general mathematical tool for dealing with uncertainty. This paper aims to introduce the concept of the type-2 fuzzy soft set by integrating the type-2 fuzzy set theory and the soft set theory. Some operations on the type-2 fuzzy soft sets are given. Furthermore, we investigate the decision making based on type-2 fuzzy soft sets. By means of level soft sets, we propose an adjustable approach to type-2 fuzzy-soft-set based decision making and give some illustrative examples. Moreover, we also introduce the weighted type-2 fuzzy soft set and examine its application to decision making.
Technology evaluation through the use of interval type-2 fuzzy sets and systems Even though fuzzy logic is one of the most common methodologies for matching different kinds of data sources, to the authors' best knowledge there is no study which uses this methodology for matching publication and patent data within a technology evaluation framework. In order to fill this gap and to demonstrate the usefulness of fuzzy logic in technology evaluation, this study proposes a novel technology evaluation framework based on an advanced/improved version of fuzzy logic, namely interval type-2 fuzzy sets and systems (IT2FSSs). This framework uses patent data obtained from the European Patent Office (EPO) and publication data obtained from Web of Science/Knowledge (WoS/K) to evaluate technology groups with respect to their trendiness. Since it has been decided to target technology groups, patent and publication data sources are matched through the use of IT2FSSs. The proposed framework enables us to make a strategic evaluation which directs considerations to use-inspired basic research, hence achieving science-based technological improvements which are more beneficial for society. A European Classification System (ECLA) class - H01-Basic Electric Elements - is evaluated by means of the proposed framework in order to demonstrate how it works. The influence of the use of IT2FSSs is investigated by comparison with the results of its type-1 counterpart. This comparison shows that the use of type-2 fuzzy sets, i.e. handling more uncertainty, improves technology evaluation outcomes.
The sampling method of defuzzification for type-2 fuzzy sets: Experimental evaluation For generalised type-2 fuzzy sets the defuzzification process has historically been slow and inefficient. This has hampered the development of type-2 Fuzzy Inferencing Systems for real applications, and therefore no advantage has been taken of the ability of type-2 fuzzy sets to model higher levels of uncertainty. The research reported here provides a novel approach for improving the speed of defuzzification for discretised generalised type-2 fuzzy sets. The traditional type-reduction method requires every embedded type-2 fuzzy set to be processed. The high level of redundancy in the huge number of embedded sets inspired the development of our sampling method, which randomly samples the embedded sets and processes only the sample. The paper presents detailed experimental results for defuzzification of constructed sets of known defuzzified value. The sampling defuzzifier is compared on aggregated type-2 fuzzy sets resulting from the inferencing stage of a FIS, in terms of accuracy and speed, with other methods, including the exhaustive method and techniques based on the α-planes representation. The results indicate that by taking only a sample of the embedded sets we are able to dramatically reduce the time taken to process a type-2 fuzzy set with very little loss in accuracy.
Evaluation model of business intelligence for enterprise systems using fuzzy TOPSIS Evaluation of business intelligence for enterprise systems before buying and deploying them is of vital importance to create decision support environment for managers in organizations. This study aims to propose a new model to provide a simple approach to assess enterprise systems in business intelligence aspects. This approach also helps the decision-maker to select the enterprise system which has suitable intelligence to support managers' decisional tasks. Using wide literature review, 34 criteria about business intelligence specifications are determined. A model that exploits fuzzy TOPSIS technique has been proposed in this research. Fuzzy weights of the criteria and fuzzy judgments about enterprise systems as alternatives are employed to compute evaluation scores and ranking. This application is realized to illustrate the utilization of the model for the evaluation problems of enterprise systems. On this basis, organizations will be able to select, assess and purchase enterprise systems which make possible better decision support environment in their work systems.
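To make the fuzzy TOPSIS machinery referenced above concrete, here is a compact sketch of the generic technique with triangular fuzzy numbers, following the common Chen-style formulation. The ratings, weights, and the assumption that all criteria are benefit criteria are invented for illustration; this is not the 34-criterion business-intelligence model of the paper.

```python
# Generic fuzzy TOPSIS sketch with triangular fuzzy numbers (illustrative data, not the paper's model).
import numpy as np

# ratings[i, j] = triangular fuzzy rating (a, b, c) of alternative i on (benefit) criterion j
ratings = np.array([
    [[5, 7, 9], [3, 5, 7], [7, 9, 10]],
    [[3, 5, 7], [5, 7, 9], [5, 7, 9]],
    [[7, 9, 10], [1, 3, 5], [3, 5, 7]],
], dtype=float)
weights = np.array([[0.3, 0.4, 0.5], [0.2, 0.3, 0.4], [0.3, 0.3, 0.4]], dtype=float)  # fuzzy weight per criterion

# 1) Normalise benefit criteria by the largest upper value per criterion.
c_star = ratings[:, :, 2].max(axis=0)
norm = ratings / c_star[None, :, None]

# 2) Weight the normalised ratings (component-wise approximation of the TFN product).
weighted = norm * weights[None, :, :]

# 3) Fuzzy ideal solutions: FPIS = (1, 1, 1) and FNIS = (0, 0, 0) per criterion.
fpis = np.ones(3)
fnis = np.zeros(3)

def tfn_distance(x, y):
    # Vertex distance between two triangular fuzzy numbers.
    return np.sqrt(np.mean((x - y) ** 2))

n_alt, n_crit, _ = weighted.shape
d_plus = np.array([sum(tfn_distance(weighted[i, j], fpis) for j in range(n_crit)) for i in range(n_alt)])
d_minus = np.array([sum(tfn_distance(weighted[i, j], fnis) for j in range(n_crit)) for i in range(n_alt)])

# 4) Closeness coefficient: larger means closer to the fuzzy ideal.
cc = d_minus / (d_plus + d_minus)
print("closeness coefficients:", cc, "ranking (best first):", np.argsort(-cc))
```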
Evaluating the informative quality of documents in SGML format from judgements by means of fuzzy linguistic techniques based on computing with words Recommender systems evaluate and filter the great amount of information available on the Web to assist people in their search processes. A fuzzy evaluation method of Standard Generalized Markup Language documents based on computing with words is presented. Given a document type definition (DTD), we consider that its elements are not equally informative. This is indicated in the DTD by defining linguistic importance attributes to the more meaningful elements of DTD chosen. Then, the evaluation method generates linguistic recommendations from linguistic evaluation judgements provided by different recommenders on meaningful elements of DTD. To do so, the evaluation method uses two quantifier guided linguistic aggregation operators, the linguistic weighted averaging operator and the linguistic ordered weighted averaging operator, which allow us to obtain recommendations taking into account the fuzzy majority of the recommenders' judgements. Using the fuzzy linguistic modeling the user-system interaction is facilitated and the assistance of system is improved. The method can be easily extended on the Web to evaluate HyperText Markup Language and eXtensible Markup Language documents.
Correlation Coefficients of Hesitant Fuzzy Sets and Their Application Based on Fuzzy Measures In this paper, several new correlation coefficients of hesitant fuzzy sets are defined, not taking into account the length of hesitant fuzzy elements and the arrangement of their possible values. To address the situations where the elements in a set are correlative, several Shapley weighted correlation coefficients are presented. It is worth noting that the Shapley weighted correlation coefficient can be seen as an extension of the correlation coefficient based on additive measures. When the weight information of attributes is partly known, models for the optimal fuzzy measure on an attribute set are constructed. After that, an approach to clustering analysis and decision making under hesitant fuzzy environment with incomplete weight information and interactive conditions is developed. Meanwhile, corresponding examples are provided to verify the practicality and feasibility of the new approaches.
Sparse Event Detection In Wireless Sensor Networks Using Compressive Sensing Compressive sensing is a revolutionary idea proposed recently to achieve a much lower sampling rate for sparse signals. In large wireless sensor networks, the events are relatively sparse compared with the number of sources. Because of deployment cost, the number of sensors is limited, and due to energy constraints, not all sensors are turned on all the time. The first contribution of this paper is to formulate sparse event detection in wireless sensor networks as a compressive sensing problem. The number of (wake-up) sensors can then be reduced to roughly the number of sparse events, which is much smaller than the total number of sources. Second, we exploit the binary nature of the events and employ Bayesian detection using this prior information. Finally, we analyze the performance of the compressive sensing algorithms under Gaussian noise. The simulation results show that the sampling rate can be reduced to 25% without sacrificing performance; decreasing the sampling rate further degrades performance gradually, down to a sampling rate of 10%. Our proposed detection algorithm performs much better than the L1-magic algorithm proposed in the literature.
Highly connected monochromatic subgraphs of multicolored graphs We consider the following question of Bollobás: given an r-coloring of E(Kn), how large a k-connected subgraph can we find using at most s colors? We provide a partial solution to this problem when s=1 (and n is not too small), showing that when r=2 the answer is n-2k+2, when r=3 the answer is ⌊(n-k)/2⌋+1 or ⌈(n-k)/2⌉+1, and when r-1 is a prime power then the answer lies between n/(r-1)-11(k²-k)r and (n-k+1)/(r-1)+r. The case s≥2 is considered in a subsequent paper (Liu et al. [6]), where we also discuss some of the more glaring open problems relating to this question. © 2009 Wiley Periodicals, Inc. J. Graph Theory 61: 22-44, 2009. The work reported in this paper was done when the authors were at the University of Memphis.
Path Criticality Computation in Parameterized Statistical Timing Analysis Using a Novel Operator This paper presents a method to compute criticality probabilities of paths in parameterized statistical static timing analysis. We partition the set of all the paths into several groups and formulate the path criticality into a joint probability of inequalities. Before evaluating the joint probability directly, we simplify the inequalities through algebraic elimination, handling topological correlation. Our proposed method uses conditional probabilities to obtain the joint probability, and the statistics of random variables representing process parameters are changed to take into account the conditions. To calculate the conditional statistics of the random variables, we derive analytic formulas by extending Clark's work. This allows us to obtain the conditional probability density function of a path delay, given that the path is critical, as well as to compute criticality probabilities of paths. Our experimental results show that the proposed method provides 4.2X better accuracy on average in comparison to the state-of-the-art method.
1.003882
0.006486
0.005821
0.003907
0.00299
0.002703
0.001837
0.001189
0.000698
0.000157
0.000006
0
0
0
ENDE: An End-to-end Network Delay Emulator Tool for Multimedia Protocol Development Multimedia applications and protocols are constantly being developed to run over the Internet. A new protocol or application after being developed has to be tested on the real Internet or simulated on a testbed for debugging and performance evaluation. In this paper, we present a novel tool, ENDE, that can emulate end-to-end delays between two hosts without requiring access to the second host. The tool enables the user to test new multimedia protocols realistically on a single machine. In a delay-observing mode, ENDE can generate accurate traces of one-way delays between two hosts on the network. In a delay-impacting mode, ENDE can be used to simulate the functioning of a protocol or an application as if it were running on the network. We will show that ENDE allows accurate estimation of one-way transit times and hence can be used even when the forward and reverse paths are asymmetric between the two hosts. Experimental results are also presented to show that ENDE is fairly accurate in the delay-impacting mode.
Scalability and accuracy in a large-scale network emulator This paper presents ModelNet, a scalable Internet emulation environment that enables researchers to deploy unmodified software prototypes in a configurable Internet-like environment and subject them to faults and varying network conditions. Edge nodes running user-specified OS and application software are configured to route their packets through a set of ModelNet core nodes, which cooperate to subject the traffic to the bandwidth, congestion constraints, latency, and loss profile of a target network topology. This paper describes and evaluates the ModelNet architecture and its implementation, including novel techniques to balance emulation accuracy against scalability. The current ModelNet prototype is able to accurately subject thousands of instances of a distributed application to Internet-like conditions with gigabits of bisection bandwidth. Experiments with several large-scale distributed services demonstrate the generality and effectiveness of the infrastructure.
Traffic data repository at the WIDE project It becomes increasingly important for both network researchers and operators to know the trend of network traffic and to find anomalies in their network traffic. This paper describes an on-going effort within the WIDE project to collect a set of free tools to build a traffic data repository containing detailed information about our backbone traffic. Traffic traces are collected by tcpdump and, after removing privacy information, the traces are made open to the public. We review the issues on user privacy and, then, the tools used to build the WIDE traffic repository. We report the current status and findings from the early stage of our IPv6 deployment.
Traffic Monitoring and Analysis, Second International Workshop, TMA 2010, Zurich, Switzerland, April 7, 2010, Proceedings
Resequencing considerations in parallel downloads Several recent studies have proposed methods to accelerate the receipt of a file by downloading its parts from different servers in parallel. This paper formulates models for an approach based on receiving only one copy of each of the data packets in a file, while different packets may be obtained from different sources. This approach guarantees faster downloads with lower network use. However, out-of-order arrivals at the receiving side are unavoidable. We present methods to keep out-of-order delivery low to ensure a more regulated flow of packets to the application. Recent papers indicate that out-of-order arrivals have many unfavorable consequences. A good indicator of the severity of out-of-order arrivals is the resequencing-buffer occupancy. The paper focuses on the analysis of the resequencing-buffer occupancy distribution and on the analysis of the methods used to reduce the occupancy of the buffer.
QoE-based packet dropper controllers for multimedia streaming in WiMAX networks. The proliferation of broadband wireless facilities, together with the demand for multimedia applications, is creating a wireless multimedia era. In this scenario, the key requirement is the delivery of multimedia content with Quality of Service (QoS) and Quality of Experience (QoE) support for thousands of users (and access networks) in next-generation broadband wireless systems. This paper sets out new QoE-aware packet controller mechanisms to keep video streaming applications at an acceptable level of quality in Worldwide Interoperability for Microwave Access (WiMAX) networks. In periods of congestion, intelligent packet dropper mechanisms for IEEE 802.16 systems are triggered to drop packets in accordance with their impact on user perception, intra-frame dependence, Group of Pictures (GoP) structure and the available wireless resources in service classes. The simulation results show that the proposed solutions reduce the impact of multimedia flows on the user's experience and optimize wireless network resources in periods of congestion. The benefits of the proposed schemes were evaluated in a simulated WiMAX QoS/QoE environment, using the following well-known QoE metrics: Peak Signal-to-Noise Ratio (PSNR), Video Quality Metric (VQM), Structural Similarity Index (SSIM) and Mean Opinion Score (MOS).
Visibility Of Individual Packet Losses In Mpeg-2 Video The ability of a human to visually detect whether a packet has been lost during the transport of compressed video depends heavily on the location of the packet loss and the content of the video. In this paper, we explore when humans can visually detect the error caused by individual packet losses. Using the results of a subjective test based on 1080 packet losses in 72 minutes of video, we design a classifier that uses objective factors extracted from the video to predict the visibility of each error. Our classifier achieves over 93% accuracy.
Adaptation strategies for streaming SVC video This paper aims to determine the best rate adaptation strategy to maximize the received video quality when streaming SVC video over the Internet. Different bandwidth estimation techniques are implemented for different transport protocols, such as using the TFRC rate when available or calculating the packet transmission rate otherwise. It is observed that controlling the rate of packets dispatched to the transport queue to match the video extraction rate resulted in oscillatory behavior in DCCP CCID3, decreasing the received video quality. Experimental results show that video should be sent at the maximum available network rate rather than at the extraction rate, provided that receiver buffer does not overflow. When the network is over-provisioned, the packet dispatch rate may also be limited with the maximum extractable video rate, to decrease the retransmission traffic without affecting the received video quality.
Techniques for measuring quality of experience Quality of Experience (QoE) relates to how users perceive the quality of an application. To capture such a subjective measure, either by subjective tests or via objective tools, is an art on its own. Given the importance of measuring users’ satisfaction to service providers, research on QoE took flight in recent years. In this paper we present an overview of various techniques for measuring QoE, thereby mostly focusing on freely available tools and methodologies.
Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information This paper considers the model problem of reconstructing an object from incomplete frequency samples. Consider a discrete-time signal f∈C^N and a randomly chosen set of frequencies Ω. Is it possible to reconstruct f from the partial knowledge of its Fourier coefficients on the set Ω? A typical result of this paper is as follows. Suppose that f is a superposition of |T| spikes, f(t)=Σ_{τ∈T} f(τ)δ(t-τ), obeying |T| ≤ C_M·(log N)^{-1}·|Ω| for some constant C_M > 0. We do not know the locations of the spikes nor their amplitudes. Then with probability at least 1-O(N^{-M}), f can be reconstructed exactly as the solution to the ℓ1 minimization problem. In short, exact recovery may be obtained by solving a convex optimization problem. We give numerical values for C_M which depend on the desired probability of success. Our result may be interpreted as a novel kind of nonlinear sampling theorem. In effect, it says that any signal made out of |T| spikes may be recovered by convex programming from almost every set of frequencies of size O(|T|·log N). Moreover, this is nearly optimal in the sense that any method succeeding with probability 1-O(N^{-M}) would in general require a number of frequency samples at least proportional to |T|·log N. The methodology extends to a variety of other situations and higher dimensions. For example, we show how one can reconstruct a piecewise constant (one- or two-dimensional) object from incomplete frequency samples - provided that the number of jumps (discontinuities) obeys the condition above - by minimizing other convex functionals such as the total variation of f.
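The ℓ1 minimization referred to above (basis pursuit) can be posed as an ordinary linear program. The sketch below does this for a random Gaussian measurement matrix rather than the partial-Fourier setting analysed in the paper; the sizes and sparsity level are arbitrary example values.

```python
# Generic basis-pursuit sketch: min ||x||_1 subject to A x = b
# (illustrative sizes and a Gaussian A, not the partial-Fourier setting of the paper).
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
n, m, k = 128, 48, 5                      # signal length, number of measurements, sparsity

x_true = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x_true[support] = rng.standard_normal(k)

A = rng.standard_normal((m, n)) / np.sqrt(m)
b = A @ x_true

# LP reformulation: x = u - v with u, v >= 0, minimise sum(u) + sum(v) subject to A(u - v) = b.
c = np.ones(2 * n)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=b, bounds=[(0, None)] * (2 * n), method="highs")
x_hat = res.x[:n] - res.x[n:]

print("max reconstruction error:", np.max(np.abs(x_hat - x_true)))
```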
Block-sparse signals: uncertainty relations and efficient recovery We consider efficient methods for the recovery of block-sparse signals--i.e., sparse signals that have nonzero entries occurring in clusters--from an underdetermined system of linear equations. An uncertainty relation for block-sparse signals is derived, based on a block-coherence measure, which we introduce. We then show that a block-version of the orthogonal matching pursuit algorithm recovers block k-sparse signals in no more than k steps if the block-coherence is sufficiently small. The same condition on block-coherence is shown to guarantee successful recovery through a mixed l2/l1-optimization approach. This complements previous recovery results for the block-sparse case which relied on small block-restricted isometry constants. The significance of the results presented in this paper lies in the fact that making explicit use of block-sparsity can provably yield better reconstruction properties than treating the signal as being sparse in the conventional sense, thereby ignoring the additional structure in the problem.
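A small sketch of a block orthogonal matching pursuit loop of the kind discussed above is given here. The block layout, problem sizes, and the stopping rule (a fixed number of selected blocks) are example choices, not the exact algorithm analysed in the paper.

```python
# Block-OMP sketch for block-sparse recovery (illustrative sizes and a fixed number of iterations).
import numpy as np

def block_omp(A, y, block_size, n_blocks_to_pick):
    """Greedy block selection: pick the block whose columns correlate most with the residual,
    then re-fit by least squares on all selected blocks."""
    m, n = A.shape
    n_blocks = n // block_size
    blocks = [np.arange(b * block_size, (b + 1) * block_size) for b in range(n_blocks)]
    selected = []
    x_hat = np.zeros(n)
    residual = y.copy()
    for _ in range(n_blocks_to_pick):
        scores = [np.linalg.norm(A[:, idx].T @ residual) for idx in blocks]
        best = int(np.argmax(scores))
        if best not in selected:
            selected.append(best)
        cols = np.concatenate([blocks[b] for b in selected])
        coef, *_ = np.linalg.lstsq(A[:, cols], y, rcond=None)
        x_hat = np.zeros(n)
        x_hat[cols] = coef
        residual = y - A @ x_hat
    return x_hat

rng = np.random.default_rng(2)
m, n, d = 40, 100, 5                       # measurements, signal length, block size
x = np.zeros(n)
x[10:15] = rng.standard_normal(d)          # two active blocks
x[60:65] = rng.standard_normal(d)
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x

x_rec = block_omp(A, y, block_size=d, n_blocks_to_pick=2)
print("recovery error:", np.linalg.norm(x_rec - x))
```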
The collapsing method of defuzzification for discretised interval type-2 fuzzy sets This paper proposes a new approach for defuzzification of interval type-2 fuzzy sets. The collapsing method converts an interval type-2 fuzzy set into a type-1 representative embedded set (RES), whose defuzzified value closely approximates that of the type-2 set. As a type-1 set, the RES can then be defuzzified straightforwardly. The novel representative embedded set approximation (RESA), to which the method is inextricably linked, is expounded, stated and proved within this paper. It is presented in two forms: the simple RESA, which deals with the simplest interval FOU, in which a vertical slice is discretised into 2 points; and the interval RESA, which concerns the case in which a vertical slice is discretised into 2 or more points. The collapsing method (simple RESA version) was tested for accuracy and speed, with excellent results on both criteria. The collapsing method proved more accurate than the Karnik-Mendel iterative procedure (KMIP) for an asymmetric test set. For both a symmetric and an asymmetric test set, the collapsing method outperformed the KMIP in relation to speed.
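For readers unfamiliar with the Karnik-Mendel iterative procedure used as the comparison baseline above, here is a sketch of the standard switch-point iteration for the centroid endpoints of a discretised interval type-2 set. The triangular lower/upper membership functions are made-up example data, and this is the generic KM procedure, not the collapsing method itself.

```python
# Karnik-Mendel style sketch: centroid endpoints of a discretised interval type-2 fuzzy set.
# The triangular lower/upper membership functions below are made-up example data.
import numpy as np

def km_endpoint(x, lower, upper, right=True, max_iter=100):
    """Iteratively locate the switch point for the right (or left) centroid endpoint."""
    w = (lower + upper) / 2.0
    y = np.dot(x, w) / np.sum(w)
    for _ in range(max_iter):
        k = np.searchsorted(x, y) - 1            # switch point: x[k] <= y < x[k+1]
        k = int(np.clip(k, 0, len(x) - 2))
        if right:
            w = np.where(np.arange(len(x)) <= k, lower, upper)
        else:
            w = np.where(np.arange(len(x)) <= k, upper, lower)
        y_new = np.dot(x, w) / np.sum(w)
        if np.isclose(y_new, y):
            return y_new
        y = y_new
    return y

# Example discretised footprint of uncertainty on [0, 10].
x = np.linspace(0.0, 10.0, 101)
upper = np.clip(1.0 - np.abs(x - 5.0) / 5.0, 0.0, 1.0)        # triangular upper MF
lower = 0.6 * np.clip(1.0 - np.abs(x - 5.0) / 3.0, 0.0, 1.0)  # narrower, scaled lower MF

y_l = km_endpoint(x, lower, upper, right=False)
y_r = km_endpoint(x, lower, upper, right=True)
print("centroid interval:", (y_l, y_r), "defuzzified value:", (y_l + y_r) / 2.0)
```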
The n-dimensional fuzzy sets and Zadeh fuzzy sets based on the finite valued fuzzy sets The connections among the n-dimensional fuzzy set, Zadeh fuzzy set and the finite-valued fuzzy set are established in this paper. The n-dimensional fuzzy set, a special L-fuzzy set, is first defined. It is pointed out that the n-dimensional fuzzy set is a generalization of the Zadeh fuzzy set, the interval-valued fuzzy set, the intuitionistic fuzzy set, the interval-valued intuitionistic fuzzy set and the three dimensional fuzzy set. Then, the definitions of cut set on n-dimensional fuzzy set and n-dimensional vector level cut set of Zadeh fuzzy set are presented. The cut set of the n-dimensional fuzzy set and n-dimensional vector level set of the Zadeh fuzzy set are both defined as n+1-valued fuzzy sets. It is shown that a cut set defined in this way has the same properties as a normal cut set of the Zadeh fuzzy set. Finally, by the use of these cut sets, decomposition and representation theorems of the n-dimensional fuzzy set and new decomposition and representation theorems of the Zadeh fuzzy set are constructed.
An Interval-Valued Intuitionistic Fuzzy Rough Set Model Given a widespread interest in rough sets as being applied to various tasks of data analysis, it is not surprising at all that we have witnessed a wave of further generalizations and algorithmic enhancements of this original concept. This paper proposes an interval-valued intuitionistic fuzzy rough model by means of integrating the classical Pawlak rough set theory with the interval-valued intuitionistic fuzzy set theory. Firstly, some concepts and properties of interval-valued intuitionistic fuzzy sets and interval-valued intuitionistic fuzzy relations are introduced. Secondly, a pair of lower and upper interval-valued intuitionistic fuzzy rough approximation operators induced from an interval-valued intuitionistic fuzzy relation is defined, and some properties of the approximation operators are investigated in detail. Furthermore, by introducing cut sets of interval-valued intuitionistic fuzzy sets, classical representations of interval-valued intuitionistic fuzzy rough approximation operators are presented. Finally, the connections between special interval-valued intuitionistic fuzzy relations and interval-valued intuitionistic fuzzy rough approximation operators are constructed, and the relationships between this model and other rough set models are also examined.
1.11256
0.104213
0.104213
0.104213
0.074123
0.002133
0.000213
0.000067
0.00002
0
0
0
0
0
A new linguistic computational model based on discrete fuzzy numbers for computing with words In recent years, several different linguistic computational models for dealing with linguistic information in processes of computing with words have been proposed. However, until now all of them rely on the special semantics of the linguistic terms, usually fuzzy numbers in the unit interval, and the linguistic aggregation operators are based on aggregation operators in [0,1]. In this paper, a linguistic computational model based on discrete fuzzy numbers whose support is a subset of consecutive natural numbers is presented ensuring the accuracy and consistency of the model. In this framework, no underlying membership functions are needed and several aggregation operators defined on the set of all discrete fuzzy numbers are presented. These aggregation operators are constructed from aggregation operators defined on a finite chain in accordance with the granularity of the linguistic term set. Finally, an example of a multi-expert decision-making problem in a hierarchical multi-granular linguistic context is given to illustrate the applicability of the proposed method and its advantages.
An ordinal approach to computing with words and the preference-aversion model Computing with words (CWW) explores the brain's ability to handle and evaluate perceptions through language, i.e., by means of the linguistic representation of information and knowledge. On the other hand, standard preference structures examine decision problems through the decomposition of the preference predicate into the simpler situations of strict preference, indifference and incomparability. Hence, following the distinctive cognitive/neurological features for perceiving positive and negative stimuli in separate regions of the brain, we consider two separate and opposite poles of preference and aversion, and obtain an extended preference structure named the Preference-aversion (P-A) structure. In this way, examining the meaning of words under an ordinal scale and using CWW's methodology, we are able to formulate the P-A model under a simple and purely linguistic approach to decision making, obtaining a solution based on the preference and non-aversion order.
Generalised Interval-Valued Fuzzy Soft Set. We introduce the concept of generalised interval-valued fuzzy soft set and its operations and study some of their properties. We give applications of this theory in solving a decision making problem. We also introduce a similarity measure of two generalised interval-valued fuzzy soft sets and discuss its application in a medical diagnosis problem. Keywords: fuzzy set; soft set; fuzzy soft set; generalised fuzzy soft set; generalised interval-valued fuzzy soft set; interval-valued fuzzy set; interval-valued fuzzy soft set.
Linguistic Interval Hesitant Fuzzy Sets and Their Application in Decision Making To cope with the hesitancy and uncertainty of the decision makers’ cognitions to decision-making problems, this paper introduces a new type of fuzzy sets called linguistic interval hesitant fuzzy sets. A linguistic interval hesitant fuzzy set is composed of several linguistic terms with each one having several interval membership degrees. Considering the application of linguistic interval hesitant fuzzy sets in decision making, an ordered relationship is offered, and several operational laws are defined. After that, several aggregation operators based on additive and fuzzy measures are introduced, by which the comprehensive attribute values can be obtained. Based on the defined distance measure, models for the optimal weight vectors are constructed. In addition, an approach to multi-attribute decision making with linguistic interval hesitant fuzzy information is developed. Finally, two numerical examples are provided to show the concrete application of the procedure.
An investment evaluation of supply chain RFID technologies: A group decision-making model with multiple information sources. Selection of radio frequency identification (RFID) technology is important to improving supply chain competitiveness. The objective of this paper is to develop a group decision-making model using fuzzy multiple attributes analysis to evaluate the suitability of supply chain RFID technology. Since numerous attributes have to be considered in evaluating RFID technology suitability, most of the information available at this stage is imprecise, subjective and vague. Fuzzy set theory appears as an essential tool to provide a decision framework for modeling the imprecision and vagueness inherent in the RFID technology selection process. In this paper, a fuzzy multiple attributes group decision-making algorithm using the principles of fusion of fuzzy information, the 2-tuple linguistic representation model, and the maximum entropy ordered weighted averaging operator is developed. The proposed method is apt to manage evaluation information assessed using both linguistic and numerical scales in a group decision-making problem with multiple information sources. The aggregation process is based on the unification of fuzzy information by means of fuzzy sets on a basic linguistic term set. Then, the unified information is transformed into linguistic 2-tuples in a way that rectifies the problem of information loss in other fuzzy linguistic approaches. The proposed method can facilitate the complex RFID technology selection process and consolidate efforts to enhance the group decision-making process. Additionally, this study presents a case study to illustrate the applicability of the proposed method and its advantages.
Linguistic hesitant fuzzy multi-criteria decision-making method based on evidential reasoning Linguistic hesitant fuzzy sets (LHFSs), which can be used to represent decision-makers’ qualitative preferences as well as reflect their hesitancy and inconsistency, have attracted a great deal of attention due to their flexibility and efficiency. This paper focuses on a multi-criteria decision-making approach that combines LHFSs with the evidential reasoning (ER) method. After reviewing existing studies of LHFSs, a new order relationship and a Hamming distance between LHFSs are introduced and some linguistic scale functions are applied. Then, the ER algorithm is used to aggregate the distributed assessment of each alternative. Subsequently, the aggregated assessments of the alternatives on the criteria are further combined to obtain the overall value of each alternative. Furthermore, a nonlinear programming model is developed and genetic algorithms are used to obtain the optimal weights of the criteria. Finally, two illustrative examples are provided to show the feasibility and usability of the method, and a comparison analysis with the existing method is made.
Fuzzy multiple criteria forestry decision making based on an integrated VIKOR and AHP approach Forestation and forest preservation in urban watersheds are issues of vital importance as forested watersheds not only preserve the water supplies of a city but also contribute to soil erosion prevention. The use of fuzzy multiple criteria decision aid (MCDA) in urban forestation has the advantage of rendering subjective and implicit decision making more objective and transparent. An additional merit of fuzzy MCDA is its ability to accommodate quantitative and qualitative data. In this paper an integrated VIKOR-AHP methodology is proposed to make a selection among the alternative forestation areas in Istanbul. In the proposed methodology, the weights of the selection criteria are determined by fuzzy pairwise comparison matrices of AHP. It is found that Omerli watershed is the most appropriate forestation district in Istanbul.
Fuzzy Linguistic PERT A model for Program Evaluation and Review Technique (PERT) under fuzzy linguistic contexts is introduced. In this fuzzy linguistic PERT network model, each activity duration is represented by a fuzzy linguistic description. Aggregation and comparison of the estimated linguistic expectations of activity durations are manipulated by the techniques of computing with words (CW). To provide suitable contexts for this purpose, we first introduce several variations of basic linguistic labels of a linguistic variable, such as weighted linguistic labels, generalized linguistic labels and weighted generalized linguistic labels, and then, based on the notion of the canonical characteristic value (CCV) function of a linguistic variable, we develop some related CW techniques for aggregation and comparison of these linguistic labels. Afterward, using a computing technique of linguistic probability introduced by Zadeh and based on the newly developed CW techniques for weighted generalized linguistic labels, we investigate the associated linguistic expectation PERT network of a fuzzy linguistic PERT network. Also, throughout the paper, several examples are used to illustrate the related notions and applications.
Pythagorean Fuzzy Choquet Integral Based MABAC Method for Multiple Attribute Group Decision Making. In this paper, we define the Choquet integral operator for Pythagorean fuzzy aggregation operators, such as the Pythagorean fuzzy Choquet integral average (PFCIA) operator and the Pythagorean fuzzy Choquet integral geometric (PFCIG) operator. The operators not only consider the importance of the elements or their ordered positions but also can reflect the correlations among the elements or their ordered positions. It is worth pointing out that most of the existing Pythagorean fuzzy aggregation operators are special cases of our operators. Meanwhile, some basic properties are discussed in detail. Later, we propose two approaches to multiple attribute group decision making with dependent and independent attributes by means of the PFCIA operator and multi-attributive border approximation area comparison (MABAC) in a Pythagorean fuzzy environment. Finally, two illustrative examples have also been taken in the present study to verify the developed approaches and to demonstrate their practicality and effectiveness.
TOPSIS-Based Nonlinear-Programming Methodology for Multiattribute Decision Making With Interval-Valued Intuitionistic Fuzzy Sets Interval-valued intuitionistic fuzzy (IVIF) sets are useful to deal with fuzziness inherent in decision data and decision-making processes. The aim of this paper is to develop a nonlinear-programming methodology that is based on the technique for order preference by similarity to ideal solution to solve multiattribute decision-making (MADM) problems with both ratings of alternatives on attributes and weights of attributes expressed with IVIF sets. In this methodology, nonlinear-programming models are constructed on the basis of the concepts of the relative-closeness coefficient and the weighted-Euclidean distance. Simpler auxiliary nonlinear-programming models are further deduced to calculate relative-closeness of IF sets of alternatives to the IVIF-positive ideal solution, which can be used to generate the ranking order of alternatives. The proposed methodology is validated and compared with other similar methods. A real example is examined to demonstrate the applicability and validity of the methodology proposed in this paper.
An Approach To Interval-Valued R-Implications And Automorphisms The aim of this work is to introduce an approach for interval-valued R-implications, which satisfy some analogous properties of R-implications. We show that the best interval representation of an R-implication that is obtained from a left continuous t-norm coincides with the interval-valued R-implication obtained from the best interval representation of such t-norm, whenever this is an inclusion monotonic interval function. This provides, under this condition, a nice characterization for the best interval representation of an R-implication, which is also an interval-valued R-implication. We also introduce interval-valued automorphisms as the best interval representations of automorphisms. It is shown that interval automorphisms act on interval R-implications, generating other interval R-implications.
The Logarithmic Nature of QoE and the Role of the Weber-Fechner Law in QoE Assessment The Weber-Fechner Law (WFL) is an important principle in psychophysics which describes the relationship between the magnitude of a physical stimulus and its perceived intensity. With the sensory system of the human body, in many cases this dependency turns out to be of logarithmic nature. Recent quantitative QoE research shows that in several different scenarios a similar logarithmic relationship can be observed between the size of a certain QoS parameter of the communication system and the resulting QoE on the user side as observed during appropriate user trials. In this paper, we discuss this surprising link in more detail. After a brief survey on the background of the WFL, we review its basic implications with respect to related work on QoE assessment for VoIP, most notably the recently published IQX hypothesis, before we present results of our own trials on QoE assessment for mobile broadband scenarios which confirm this dependency also for data services. Finally, we point out some conclusions and directions for further research.
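A toy curve-fitting sketch of the logarithmic QoS-to-QoE relationship discussed above is shown below. The sample data points and the parameterisation MOS ≈ a - b·ln(x) are invented for illustration; they are not measurements or models from the paper.

```python
# Toy fit of a Weber-Fechner-style logarithmic QoS -> QoE mapping (invented sample data).
import numpy as np
from scipy.optimize import curve_fit

def wfl_model(x, a, b):
    # Hypothesised mapping: perceived quality falls logarithmically with the QoS impairment x.
    return a - b * np.log(x)

# Example data: impairment level (e.g. response time in seconds) vs. mean opinion score.
x = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])
mos = np.array([4.6, 4.1, 3.7, 3.2, 2.8, 2.3])

params, _ = curve_fit(wfl_model, x, mos)
a, b = params
print(f"fitted model: MOS = {a:.2f} - {b:.2f} * ln(x)")
print("predicted MOS at x=3:", wfl_model(3.0, a, b))
```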
Sublinear compressive sensing reconstruction via belief propagation decoding We propose a new compressive sensing scheme, based on codes of graphs, that allows for joint design of sensing matrices and low complexity reconstruction algorithms. The compressive sensing matrices can be shown to offer asymptotically optimal performance when used in combination with OMP methods. For more elaborate greedy reconstruction schemes, we propose a new family of list decoding and multiple-basis belief propagation algorithms. Our simulation results indicate that the proposed CS scheme offers good complexity-performance tradeoffs for several classes of sparse signals.
Highly connected multicoloured subgraphs of multicoloured graphs Suppose the edges of the complete graph on n vertices, E(K_n), are coloured using r colours; how large a k-connected subgraph are we guaranteed to find, which uses only at most s of the colours? This question is due to Bollobás, and the case s=1 was considered in Liu et al. [Highly connected monochromatic subgraphs of multicoloured graphs, J. Graph Theory, to appear]. Here we shall consider the case s>=2, proving in particular that when s=2 and r+1 is a power of 2 then the answer lies between 4n/(r+1)-17kr(r+2k+1) and 4n/(r+1)+4, that if r=2s+1 then the answer lies between (1-1/rs)n-7rsk and (1-1/rs)n+1, and that phase transitions occur at s=⌊r/2⌋ and s=Θ(r). We shall also mention some of the more glaring open problems relating to this question.
1.011256
0.023069
0.023069
0.010388
0.010074
0.005385
0.003626
0.001862
0.000499
0.00009
0
0
0
0
Case-based decision support
The Scientific Community Metaphor Scientific communities have proven to be extremely successful at solving problems. They are inherently parallel systems and their macroscopic nature makes them amenable to careful study. In this paper the character of scientific research is examined drawing on sources in the philosophy and history of science. We maintain that the success of scientific research depends critically on its concurrency and pluralism. A variant of the language Ether is developed that embodies notions of concurrency necessary to emulate some of the problem solving behavior of scientific communities. Capabilities of scientific communities are discussed in parallel with simplified models of these capabilities in this language.
On Agent-Mediated Electronic Commerce This paper surveys and analyzes the state of the art of agent-mediated electronic commerce (e-commerce), concentrating particularly on the business-to-consumer (B2C) and business-to-business (B2B) aspects. From the consumer buying behavior perspective, agents are being used in the following activities: need identification, product brokering, buyer coalition formation, merchant brokering, and negotiation. The roles of agents in B2B e-commerce are discussed through the business-to-business transaction model that identifies agents as being employed in partnership formation, brokering, and negotiation. Having identified the roles for agents in B2C and B2B e-commerce, some of the key underpinning technologies of this vision are highlighted. Finally, we conclude by discussing the future directions and potential impediments to the wide-scale adoption of agent-mediated e-commerce.
Janus - A Paradigm For Active Decision Support Active decision support is concerned with developing advanced forms of decision support where the support tools are capable of actively participating in the decision making process, and decisions are made by fruitful collaboration between the human and the machine. It is currently an active and leading area of research within the field of decision support systems. The objective of this paper is to share the details of our research in this area. We present our overall research strategy for exploring advanced forms of decision support and discuss in detail our research prototype called JANUS that implements our ideas. We establish the contributions of our work and discuss our experiences and plans for future.
Implications of buyer decision theory for design of e-commerce websites In the rush to open their website, e-commerce sites too often fail to support buyer decision making and search, resulting in a loss of sale and the customer's repeat business. This paper reviews why this occurs and the failure of many B2C and B2B website executives to understand that appropriate decision support and search technology can't be fully bought off-the-shelf. Our contention is that significant investment and effort is required at any given website in order to create the decision support and search agents needed to properly support buyer decision making. We provide a framework to guide such effort (derived from buyer behavior choice theory); review the open problems that e-catalog sites pose to the framework and to existing search engine technology; discuss underlying design principles and guidelines; validate the framework and guidelines with a case study; and discuss lessons learned and steps needed to better support buyer decision behavior in the future. Future needs are also pinpointed.
An algorithm for pronominal anaphora resolution This paper presents an algorithm for identifying the noun phrase antecedents of third person pronouns and lexical anaphors (reflexives and reciprocals). The algorithm applies to the syntactic representations generated by McCord's Slot Grammar parser and relies on salience measures derived from syntactic structure and a simple dynamic model of attentional state. Like the parser, the algorithm is implemented in Prolog. The authors have tested it extensively on computer manual texts and conducted a blind test on manual text containing 360 pronoun occurrences. The algorithm successfully identifies the antecedent of the pronoun for 86% of these pronoun occurrences. The relative contributions of the algorithm's components to its overall success rate in this blind test are examined. Experiments were conducted with an enhancement of the algorithm that contributes statistically modelled information concerning semantic and real-world relations to the algorithm's decision procedure. Interestingly, this enhancement only marginally improves the algorithm's performance (by 2%). The algorithm is compared with other approaches to anaphora resolution that have been proposed in the literature. In particular, the search procedure of Hobbs' algorithm was implemented in the Slot Grammar framework and applied to the sentences in the blind test set. The authors' algorithm achieves a higher rate of success (4%) than Hobbs' algorithm. The relation of the algorithm to the centering approach is discussed, as well as to models of anaphora resolution that invoke a variety of informational factors in ranking antecedent candidates.
The concept of a linguistic variable and its application to approximate reasoning-III By a linguistic variable we mean a variable whose values are words or sentences in a natural or artificial language. For example, Age is a linguistic variable if its values are linguistic rather than numerical, i.e., young, not young, very young, quite young, old, not very old and not very young, etc., rather than 20, 21, 22, 23, ... In more specific terms, a linguistic variable is characterized by a quintuple (L, T(L), U, G, M) in which L is the name of the variable; T(L) is the term-set of L, that is, the collection of its linguistic values; U is a universe of discourse; G is a syntactic rule which generates the terms in T(L); and M is a semantic rule which associates with each linguistic value X its meaning, M(X), where M(X) denotes a fuzzy subset of U. The meaning of a linguistic value X is characterized by a compatibility function, c: U → [0, 1], which associates with each u in U its compatibility with X. Thus, the compatibility of age 27 with young might be 0.7, while that of 35 might be 0.2. The function of the semantic rule is to relate the compatibilities of the so-called primary terms in a composite linguistic value, e.g., young and old in not very young and not very old, to the compatibility of the composite value. To this end, the hedges such as very, quite, extremely, etc., as well as the connectives and and or are treated as nonlinear operators which modify the meaning of their operands in a specified fashion.
Fuzzy logic systems for engineering: a tutorial A fuzzy logic system (FLS) is unique in that it is able to simultaneously handle numerical data and linguistic knowledge. It is a nonlinear mapping of an input data (feature) vector into a scalar output, i.e., it maps numbers into numbers. Fuzzy set theory and fuzzy logic establish the specifics of the nonlinear mapping. This tutorial paper provides a guided tour through those aspects of fuzzy sets and fuzzy logic that are necessary to synthesize an FLS. It does this by starting with crisp set theory and dual logic and demonstrating how both can be extended to their fuzzy counterparts. Because engineering systems are, for the most part, causal, we impose causality as a constraint on the development of the FLS. After synthesizing a FLS, we demonstrate that it can be expressed mathematically as a linear combination of fuzzy basis functions, and is a nonlinear universal function approximator, a property that it shares with feedforward neural networks. The fuzzy basis function expansion is very powerful because its basis functions can be derived from either numerical data or linguistic knowledge, both of which can be cast into the forms of IF-THEN rules
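The abstract above notes that an FLS can be written as a linear combination of fuzzy basis functions. The snippet below sketches that expansion with Gaussian membership functions, a product t-norm, and singleton consequents; the rule centres, spreads, and weights are made-up example values, not a system from the tutorial.

```python
# Fuzzy basis function expansion sketch: Gaussian antecedents, singleton consequents,
# product t-norm and height defuzzification (rule centres/weights are made-up).
import numpy as np

rule_centres = np.array([[0.2, 0.3], [0.5, 0.7], [0.8, 0.4]])   # one row per rule, one column per input
rule_sigmas = np.array([[0.2, 0.2], [0.15, 0.25], [0.2, 0.2]])
rule_outputs = np.array([1.0, 2.5, 4.0])                        # singleton consequents

def fls(x):
    """Map an input vector x to a crisp output via fuzzy basis functions."""
    # Firing strength of each rule: product of Gaussian memberships over the inputs.
    memberships = np.exp(-0.5 * ((x - rule_centres) / rule_sigmas) ** 2)
    firing = memberships.prod(axis=1)
    # Fuzzy basis functions are the normalised firing strengths.
    phi = firing / firing.sum()
    return float(phi @ rule_outputs)        # linear combination of basis functions

print(fls(np.array([0.25, 0.35])))   # dominated by rule 1 -> output close to 1.0
print(fls(np.array([0.75, 0.45])))   # dominated by rule 3 -> output close to 4.0
```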
Convergence Rates of Best N-term Galerkin Approximations for a Class of Elliptic sPDEs Deterministic Galerkin approximations of a class of second order elliptic PDEs with random coefficients on a bounded domain D⊂ℝd are introduced and their convergence rates are estimated. The approximations are based on expansions of the random diffusion coefficients in L 2(D)-orthogonal bases, and on viewing the coefficients of these expansions as random parameters y=y(ω)=(y i (ω)). This yields an equivalent parametric deterministic PDE whose solution u(x,y) is a function of both the space variable x∈D and the in general countably many parameters y. We establish new regularity theorems describing the smoothness properties of the solution u as a map from y∈U=(−1,1)∞ to $V=H^{1}_{0}(D)$. These results lead to analytic estimates on the V norms of the coefficients (which are functions of x) in a so-called “generalized polynomial chaos” (gpc) expansion of u. Convergence estimates of approximations of u by best N-term truncated V valued polynomials in the variable y∈U are established. These estimates are of the form N −r , where the rate of convergence r depends only on the decay of the random input expansion. It is shown that r exceeds the benchmark rate 1/2 afforded by Monte Carlo simulations with N “samples” (i.e., deterministic solves) under mild smoothness conditions on the random diffusion coefficients. A class of fully discrete approximations is obtained by Galerkin approximation from a hierarchic family $\{V_{l}\}_{l=0}^{\infty}\subset V$of finite element spaces in D of the coefficients in the N-term truncated gpc expansions of u(x,y). In contrast to previous works, the level l of spatial resolution is adapted to the gpc coefficient. New regularity theorems describing the smoothness properties of the solution u as a map from y∈U=(−1,1)∞ to a smoothness space W⊂V are established leading to analytic estimates on the W norms of the gpc coefficients and on their space discretization error. The space W coincides with $H^{2}(D)\cap H^{1}_{0}(D)$in the case where D is a smooth or convex domain. Our analysis shows that in realistic settings a convergence rate $N_{\mathrm{dof}}^{-s}$in terms of the total number of degrees of freedom N dof can be obtained. Here the rate s is determined by both the best N-term approximation rate r and the approximation order of the space discretization in D.
Coding Algorithms for 3DTV—A Survey Research efforts on 3DTV technology have been strengthened worldwide recently, covering the whole media processing chain from capture to display. Different 3DTV systems rely on different 3D scene representations that integrate various types of data. Efficient coding of these data is crucial for the success of 3DTV. Compression of pixel-type data including stereo video, multiview video, and associated depth or disparity maps extends available principles of classical video coding. Powerful algorithms and open international standards for multiview video coding and coding of video plus depth data are available and under development, which will provide the basis for introduction of various 3DTV systems and services in the near future. Compression of 3D mesh models has also reached a high level of maturity. For static geometry, a variety of powerful algorithms are available to efficiently compress vertices and connectivity. Compression of dynamic 3D geometry is currently a more active field of research. Temporal prediction is an important mechanism to remove redundancy from animated 3D mesh sequences. Error resilience is important for transmission of data over error prone channels, and multiple description coding (MDC) is a suitable way to protect data. MDC of still images and 2D video has already been widely studied, whereas multiview video and 3D meshes have been addressed only recently. Intellectual property protection of 3D data by watermarking is a pioneering research area as well. The 3D watermarking methods in the literature are classified into three groups, considering the dimensions of the main components of scene representations and the resulting components after applying the algorithm. In general, 3DTV coding technology is maturating. Systems and services may enter the market in the near future. However, the research area is relatively young compared to coding of other types of media. Therefore, there is still a lot of room for improvement and new development o- f algorithms.
Asymptotic Analysis of MAP Estimation via the Replica Method and Applications to Compressed Sensing The replica method is a nonrigorous but well-known technique from statistical physics used in the asymptotic analysis of large, random, nonlinear problems. This paper applies the replica method, under the assumption of replica symmetry, to study estimators that are maximum a posteriori (MAP) under a postulated prior distribution. It is shown that with random linear measurements and Gaussian noise, the replica-symmetric prediction of the asymptotic behavior of the postulated MAP estimate of an $n$-dimensional vector “decouples” as $n$ scalar postulated MAP estimators. The result is based on applying a hardening argument to the replica analysis of postulated posterior mean estimators of Tanaka and of Guo and Verdú. The replica-symmetric postulated MAP analysis can be readily applied to many estimators used in compressed sensing, including basis pursuit, least absolute shrinkage and selection operator (LASSO), linear estimation with thresholding, and zero norm-regularized estimation. In the case of LASSO estimation, the scalar estimator reduces to a soft-thresholding operator, and for zero norm-regularized estimation, it reduces to a hard threshold. Among other benefits, the replica method provides a computationally tractable method for precisely predicting various performance metrics including mean-squared error and sparsity pattern recovery probability.
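The scalar estimators named in this abstract have simple closed forms, and a small sketch makes the decoupling picture concrete: under the replica-symmetric prediction each coordinate behaves like a scalar observation passed through a thresholding rule. The sketch below is illustrative only; the threshold value is arbitrary rather than the replica-predicted effective noise level.

```python
import numpy as np

def soft_threshold(y, t):
    """Scalar LASSO (l1) estimator: shrink the observation toward zero by t."""
    return np.sign(y) * np.maximum(np.abs(y) - t, 0.0)

def hard_threshold(y, t):
    """Scalar zero-norm-regularized estimator: keep y only if |y| exceeds t."""
    return np.where(np.abs(y) > t, y, 0.0)

# Illustrative use on noisy scalar observations of a sparse vector
# (the "decoupled" scalar channels the abstract refers to).
rng = np.random.default_rng(0)
x = np.array([0.0, 0.0, 2.0, 0.0, -1.5])
y = x + 0.2 * rng.standard_normal(x.size)
print(soft_threshold(y, 0.4))
print(hard_threshold(y, 0.4))
```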
Preferences and their application in evolutionary multiobjective optimization The paper describes a new preference method and its use in multiobjective optimization. These preferences are developed with a goal to reduce the cognitive overload associated with the relative importance of a certain criterion within a multiobjective design environment involving large numbers of objectives. Their successful integration with several genetic-algorithm-based design search and optimi...
New Type-2 Rule Ranking Indices for Designing Parsimonious Interval Type-2 Fuzzy Logic Systems In this paper, we propose two novel indices for type-2 fuzzy rule ranking to identify the most influential fuzzy rules in designing type-2 fuzzy logic systems, and name them the R-values and c-values of fuzzy rules, respectively. The R-values of type-2 fuzzy rules are obtained by applying QR decomposition, in which there is no need to estimate a rank as required in the SVD-QR with column pivoting algorithm. The c-values of type-2 fuzzy rules are suggested to rank rules based on the effects of rule consequents. Experimental results on a signal recovery problem have shown that by using the proposed indices the most influential type-2 fuzzy rules can be effectively selected to construct parsimonious type-2 fuzzy models while system performance is kept at a satisfactory level.
Designing type-2 fuzzy logic system controllers via fuzzy Lyapunov synthesis for the output regulator of a servomechanism with nonlinear backlash Fuzzy Lyapunov Synthesis is extended to the design of Type-2 Fuzzy Logic System Controllers for the output regulation problem for a servomechanism with nonlinear backlash. The problem in question is to design a feedback controller so as to obtain the closed-loop system in which all trajectories are bounded and the load of the driver is regulated to a desired position while also attenuating the influence of external disturbances. The servomotor position is the only measurement available for feedback; the proposed extension is far from trivial because of nonminimum phase properties of the system. Performance issues of the Type-2 Fuzzy Logic Regulator constructed are illustrated in a simulation study.
1.2
0.2
0.2
0.2
0.2
0
0
0
0
0
0
0
0
0
QoE-Aware Scheduling for Video-Streaming in High Speed Downlink Packet Access With widespread use of multimedia communication, quality of experience (QoE) progressively becomes an important factor in networking today. Besides, multimedia applications can be supported under various technologies, including wired and wireless networks. Among them, the Universal Mobile Telecommunications System (UMTS) is one of the most popular technologies thanks to its support for mobility. Improved with a new access method (High Speed Downlink Packet Access or HSDPA), it can provide higher bandwidth and enable a wider range of services including multimedia applications. In UMTS, different categories of traffic are specified along with their characteristics. Best effort traffic has been specified with less priority because it has fewer constraints. On the other hand, real-time multimedia traffic such as streaming video or VoIP is more sensitive to network condition changes, hence special treatment (e.g. a QoS scheduler) is needed in order to achieve user satisfaction. According to the literature, most scheduling mechanisms mainly take into account signal quality and fairness and do not consider user perception. In this paper, we propose a novel approach, the QoE-aware scheduler, that takes quality of experience into account when making scheduling decisions. Compared to other existing schedulers, the QoE-aware approach achieves favorable performance in terms of user satisfaction, throughput, and fairness.
Streaming Video Capacities of LTE Air-Interface The 3GPP Long Term Evolution (LTE) systems have been designed to deliver higher peak data rates, higher capacity and lower air-interface latency compared to prior 2G and 3G systems. This high performance will make it possible to support more demanding applications beyond web browsing and voice, which will require higher data rates and QoS guarantees. Video services are becoming very popular over the Internet. With the wide deployment of LTE in the near future, the demand for high data-rate video applications over cellular wireless will grow. However, in order to make these services commercially viable, it is necessary that the LTE air-interface can deliver high quality services to a sizeable number of users simultaneously. In this paper we investigate the downlink video capacities of the LTE air-interface by dynamic system simulation using realistic video traffic models and detailed models of the LTE air-interface. We investigate how video quality and system outage criteria impact the air-interface video capacities and describe observations on video stream quality and operator revenue under certain cost assumptions.
QoS control for WCDMA high speed packet data Wideband CDMA Release 5 is expected to support peak downlink bit rates of 10 Mbps for use with bandwidth intensive data and multimedia applications. Such rates are achieved through fast link adaptation, fast Hybrid ARQ and fast scheduling over a shared forward link packet data channel. Total throughput can be maximized by having the frame scheduler take into account the instantaneous radio conditions of users and serving users during their good radio condition periods. This results in high user diversity gains. However in order to support QoS guarantees, it will sometimes be necessary to serve users experiencing bad radio conditions in order to maintain their requested QoS levels. In this paper we present a flexible algorithm that provides user QoS guarantees while at the same time achieving some user diversity gains.
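The abstract above describes the trade-off (serve users in good radio conditions for diversity gain, but also serve users falling below their QoS targets) without giving the algorithm, so the sketch below is only a generic illustration of that idea, not the paper's scheduler; the scheduling metric, the `qos_deficit` input and the `beta` weight are assumptions introduced here.

```python
import numpy as np

def schedule_slot(inst_rate, avg_rate, qos_deficit, beta=1.0):
    """Pick the user to serve in the current scheduling interval.

    inst_rate   : achievable rate per user under current radio conditions
    avg_rate    : exponentially averaged served rate per user
    qos_deficit : how far each user currently is below its QoS target (>= 0)
    beta        : weight of the QoS term relative to the channel-aware term
    """
    pf_metric = inst_rate / np.maximum(avg_rate, 1e-9)   # user-diversity gain
    metric = pf_metric * (1.0 + beta * qos_deficit)      # boost starved users
    return int(np.argmax(metric))

# Toy example: user 2 has poor radio conditions but a large QoS deficit,
# so it gets served despite the lower instantaneous rate.
inst = np.array([5.0, 3.0, 1.0])
avg = np.array([4.0, 2.5, 1.2])
deficit = np.array([0.0, 0.1, 2.0])
print(schedule_slot(inst, avg, deficit))
```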
Toward enhanced mobile video services over WiMAX and LTE [WiMAX/LTE Update] Wireless networks are on the verge of a third phase of growth. The first phase was dominated by voice traffic, and the second phase, which we are currently in, is dominated by data traffic. In the third phase we predict that the traffic will be dominated by video and will require new ways to optimize the network to prevent saturation. This increase in video traffic is one of the key drivers of the evolution to new mobile broadband standards like WiMAX IEEE 802.16m and 3G LTE and LTE-Advanced, motivating the need to enhance the video service capabilities of future cellular and mobile broadband systems. Therefore, it is important to understand both the potential and limitations of these networks for delivering video content in the future, which will include not only traditional video broadcasts, but also video streaming and uploading in the uplink direction. In that vein this article provides an overview of technology options for enabling multicast and unicast video services over WiMAX and LTE networks, quantifies and compares the video capacities of these networks in realistic environments, and discusses new techniques that could be exploited in the future to further enhance the video capacity and quality of user experience.
Video Transport Evaluation With H.264 Video Traces. The performance evaluation of video transport mechanisms becomes increasingly important as encoded video accounts for growing portions of the network traffic. Compared to the widely studied MPEG-4 encoded video, the recently adopted H.264 video coding standards include novel mechanisms, such as hierarchical B frame prediction structures and highly efficient quality scalable coding, that have impor...
Quality of experience for HTTP adaptive streaming services The growing consumer demand for mobile video services is one of the key drivers of the evolution of new wireless multimedia solutions requiring exploration of new ways to optimize future wireless networks for video services towards delivering enhanced quality of experience (QoE). One of these key video enhancing solutions is HTTP adaptive streaming (HAS), which has recently been spreading as a form of Internet video delivery and is expected to be deployed more broadly over the next few years. As a relatively new technology in comparison with traditional push-based adaptive streaming techniques, deployment of HAS presents new challenges and opportunities for content developers, service providers, network operators and device manufacturers. One of these important challenges is developing evaluation methodologies and performance metrics to accurately assess user QoE for HAS services, and effectively utilizing these metrics for service provisioning and optimizing network adaptation. In that vein, this article provides an overview of HAS concepts, and reviews the recently standardized QoE metrics and reporting framework in 3GPP. Furthermore, we present an end-to-end QoE evaluation study on HAS conducted over 3GPP LTE networks and conclude with a discussion of future challenges and opportunities in QoE optimization for HAS services.
Factors influencing quality of experience of commonly used mobile applications. Increasingly, we use mobile applications and services in our daily life activities, to support our needs for information, communication or leisure. However, user acceptance of a mobile application depends on at least two conditions: the application's perceived experience, and the appropriateness of the application to the user's context and needs. However, we have a weak understanding of a mobile u...
Delivering quality of experience in multimedia networks Next-generation multimedia networks need to deliver applications with a high quality of experience (QoE) for users. Many network elements provide the building blocks for service delivery, and element managers provide performance data for specific network elements. However, this discrete measurement data is not sufficient to assess the overall end user experience with particular applications. In today's competitive world of multimedia applications, it is imperative for service providers to differentiate themselves in delivering service level agreements with certainty; otherwise they run the risk of customer churn. While QoE for well-established services like voice and Internet access is well understood, the same cannot be said about newer multimedia services. In this paper, we propose parameters for measuring the QoE for newer services. We propose and justify parameter values for satisfactory end user experience and show how standard measurement data can be collected from various network elements and processed to derive end user QoE. © 2010 Alcatel-Lucent.
ENDE: An End-to-end Network Delay Emulator Tool for Multimedia Protocol Development Multimedia applications and protocols are constantly being developed to run over the Internet. A new protocol or application after being developed has to be tested on the real Internet or simulated on a testbed for debugging and performance evaluation. In this paper, we present a novel tool, ENDE, that can emulate end-to-end delays between two hosts without requiring access to the second host. The tool enables the user to test new multimedia protocols realistically on a single machine. In a delay-observing mode, ENDE can generate accurate traces of one-way delays between two hosts on the network. In a delay-impacting mode, ENDE can be used to simulate the functioning of a protocol or an application as if it were running on the network. We will show that ENDE allows accurate estimation of one-way transit times and hence can be used even when the forward and reverse paths are asymmetric between the two hosts. Experimental results are also presented to show that ENDE is fairly accurate in the delay-impacting mode.
Sara: Segment Aware Rate Adaptation Algorithm For Dynamic Adaptive Streaming Over Http Dynamic adaptive HTTP (DASH) based streaming is steadily becoming the most popular online video streaming technique. DASH streaming provides seamless playback by adapting the video quality to the network conditions during the video playback. A DASH server supports adaptive streaming by hosting multiple representations of the video and each representation is divided into small segments of equal playback duration. At the client end, the video player uses an adaptive bitrate selection (ABR) algorithm to decide the bitrate to be selected for each segment depending on the current network conditions. Currently, proposed ABR algorithms ignore the fact that the segment sizes significantly vary for a given video bitrate. Due to this, even though an ABR algorithm is able to measure the network bandwidth, it may fail to predict the time to download the next segment. In this paper, we propose a segment-aware rate adaptation (SARA) algorithm that considers the segment size variation in addition to the estimated path bandwidth and the current buffer occupancy to accurately predict the time required to download the next segment. We also developed an open source Python based emulated DASH video player, that was used to compare the performance of SARA and a basic ABR. Our results show that SARA provides a significant gain over the basic algorithm in the video quality delivered, without noticeably impacting the video switching rates.
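A rough sketch of the core idea attributed to SARA above: use the known per-segment size together with the bandwidth estimate and the buffer occupancy to predict the download time before picking a bitrate. The decision rule, the `safety_margin_s` parameter and the numbers are illustrative assumptions, not the published algorithm.

```python
def predict_download_time(segment_bytes, est_bandwidth_bps):
    """Predicted fetch time when the exact size of the next segment is known."""
    return 8.0 * segment_bytes / est_bandwidth_bps

def pick_bitrate(bitrates, next_segment_bytes, est_bandwidth_bps,
                 buffer_s, safety_margin_s=2.0):
    """Choose the highest bitrate whose predicted download time still fits the buffer.

    bitrates           : available representations, low to high (bps)
    next_segment_bytes : dict bitrate -> size in bytes of the next segment
    buffer_s           : current playback buffer occupancy in seconds
    """
    for rate in reversed(bitrates):
        t = predict_download_time(next_segment_bytes[rate], est_bandwidth_bps)
        if t + safety_margin_s <= buffer_s:
            return rate
    return bitrates[0]   # fall back to the lowest representation

# Made-up segment sizes for three representations of the same segment.
sizes = {500_000: 250_000, 1_000_000: 600_000, 2_000_000: 1_400_000}
print(pick_bitrate(sorted(sizes), sizes, est_bandwidth_bps=1_500_000, buffer_s=8.0))
```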
Proceedings of the 41st Design Automation Conference, DAC 2004, San Diego, CA, USA, June 7-11, 2004
A Linear-Time Approach for Static Timing Analysis Covering All Process Corners Manufacturing process variations lead to circuit timing variability and a corresponding timing yield loss. Traditional corner analysis consists of checking all process corners (combinations of process parameter extremes) to make sure that circuit timing constraints are met at all corners, typically by running static timing analysis (STA) at every corner. This approach is becoming too expensive due to the exponential increase in the number of corners with modern processes. As an alternative, we propose a linear-time approach for STA which covers all process corners in a single pass. Our technique assumes a linear dependence of delay on process parameters and provides tight bounds on the worst-case circuit delay. It exhibits high accuracy (within 1-3%) in practice and, if the circuit has m gates and n relevant process parameters, the complexity of the algorithm is O(mn).
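Under the linear delay model assumed in the abstract above, the worst case over all 2^n corners of an affine delay d = d0 + sum_i a_i p_i with p_i in [-1, 1] is simply d0 + sum_i |a_i|, which is what makes a single-pass analysis possible. The sketch below shows only this corner bound on a toy two-gate path; the propagation of such affine forms through a full timing graph, and the paper's tightness arguments, are not reproduced.

```python
import numpy as np

def corner_bound(nominal, sens):
    """Upper bound of nominal + sens . p over all corners p in {-1, +1}^n.

    For an affine delay model the maximum over the 2^n corners is attained at
    p_i = sign(sens_i), so it equals nominal + sum |sens_i|: one pass, no
    corner enumeration.
    """
    return nominal + np.sum(np.abs(sens))

# A two-gate path: path sensitivities are the sums of the gate sensitivities.
g1 = (10.0, np.array([0.5, -0.2, 0.1]))   # (nominal delay, per-parameter sensitivities)
g2 = (12.0, np.array([0.3, 0.4, -0.3]))
path_nom = g1[0] + g2[0]
path_sens = g1[1] + g2[1]
print(corner_bound(path_nom, path_sens))          # single-pass bound
# Brute-force check over all 2^3 corners for this tiny example:
corners = np.array(np.meshgrid(*[[-1, 1]] * 3)).T.reshape(-1, 3)
print(max(path_nom + corners @ path_sens))
```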
Frequency domain subspace-based identification of discrete-time singular power spectra In this paper, we propose a subspace algorithm for the identification of linear-time-invariant discrete-time systems with more outputs than inputs from measured power spectrum data. The proposed identification algorithm is interpolatory and strongly consistent when the corruptions in the spectrum measurements have a bounded covariance function. Asymptotic performance and the interpolation properties of the proposed algorithm are illustrated by means of a numerical example.
Object recognition robust to imperfect depth data In this paper, we present an adaptive data fusion model that robustly integrates depth and image only perception. Combining dense depth measurements with images can greatly enhance the performance of many computer vision algorithms, yet degraded depth measurements (e.g., missing data) can also cause dramatic performance losses to levels below image-only algorithms. We propose a generic fusion model based on maximum likelihood estimates of fused image-depth functions for both available and missing depth data. We demonstrate its application to each step of a state-of-the-art image-only object instance recognition pipeline. The resulting approach shows increased recognition performance over alternative data fusion approaches.
1.053176
0.025519
0.017203
0.01646
0.009318
0.00256
0.000458
0.000132
0.000037
0.000001
0
0
0
0
Computing with words via Turing machines: a formal approach Computing with words (CW) as a methodology means computing and reasoning by the use of words in place of numbers or symbols, which may conform more to humans' perception when describing real-world problems. In this paper, as a continuation of a previous paper, we aim to develop and deepen a formal aspect of CW. According to the previous paper, the basic point of departure is that CW treats certain formal modes of computation with strings of fuzzy subsets instead of symbols as their inputs. Specifically, 1) we elaborate on CW via Turing machine (TM) models, showing that the time complexity is at least exponential if the inputs are strings of words; 2) a negative result of (6) not holding is verified, which indicates that the extension principle for CW via TMs needs to be re-examined; 3) we discuss CW via context-free grammars and regular grammars, and the extension principles for CW via these formal grammars are set up; 4) some equivalences between fuzzy pushdown automata (respectively, fuzzy finite-state automata) and fuzzy context-free grammars (respectively, fuzzy regular grammars) are demonstrated in the sense that the inputs are instead strings of words; 5) some instances are described in detail. In summary, the formal aspect of CW is more systematically established and more deeply dealt with, while some new problems also emerge.
Web shopping expert using new interval type-2 fuzzy reasoning Finding a product with high quality and reasonable price online is a difficult task due to uncertainty of Web data and queries. In order to handle the uncertainty problem, the Web Shopping Expert, a new type-2 fuzzy online decision support system, is proposed. In the Web Shopping Expert, a fast interval type-2 fuzzy method is used to directly use all rules with type-1 fuzzy sets to perform type-2 fuzzy reasoning efficiently. The parameters of type-2 fuzzy sets are optimized by a least square method. The Web Shopping Expert based on the interval type-2 fuzzy inference system provides reasonable decisions for online users.
Perceptual reasoning for perceptual computing: a similarity-based approach Perceptual reasoning (PR) is an approximate reasoning method that can be used as a computing-with-words (CWW) engine in perceptual computing. There can be different approaches to implement PR, e.g., firing-interval-based PR (FI-PR), which has been proposed in J. M. Mendel and D. Wu, IEEE Trans. Fuzzy Syst., vol. 16, no. 6, pp. 1550-1564, Dec. 2008 and similarity-based PR (SPR), which is proposed in this paper. Both approaches satisfy the requirement on a CWW engine that the result of combining fired rules should lead to a footprint of uncertainty (FOU) that resembles the three kinds of FOUs in a CWW codebook. A comparative study shows that S-PR leads to output FOUs that resemble word FOUs, which are obtained from subject data, much more closely than FI-PR; hence, S-PR is a better choice for a CWW engine than FI-PR.
Concept Representation and Database Structures in Fuzzy Social Relational Networks We discuss the idea of fuzzy relationships and their role in modeling weighted social relational networks. The paradigm of computing with words is introduced, and the role that fuzzy sets play in representing linguistic concepts is described. We discuss how these technologies can provide a bridge between a network analyst's linguistic description of social network concepts and the formal model of the network. We then turn to some examples of taking an analyst's network concepts and formally representing them in terms of network properties. We first do this for the concept of clique and then for the idea of node importance. Finally, we introduce the idea of vector-valued nodes and begin developing a technology of social network database theory.
Selecting the advanced manufacturing technology using fuzzy multiple attributes group decision making with multiple fuzzy information Selection of advanced manufacturing technology in manufacturing system management is very important in determining manufacturing system competitiveness. This research develops a fuzzy multiple attribute decision-making approach, applied in group decision-making, to improve the advanced manufacturing technology selection process. Since numerous attributes must be considered in evaluating manufacturing technology suitability, and most information available at this stage is subjective, imprecise and vague, fuzzy set theory provides a mathematical framework for modeling imprecision and vagueness. In the proposed approach, a new fusion method of fuzzy information is developed to manage information assessed in different linguistic scales (multi-granularity linguistic term sets) and numerical scales. The flexible manufacturing system adopted in the Taiwanese bicycle industry is employed in this study to demonstrate the computational process of the proposed method. Finally, a sensitivity analysis can be performed to examine the robustness of the solution.
Team Situation Awareness Using Web-Based Fuzzy Group Decision Support Systems Situation awareness (SA) is an important element in supporting responses and decision making for crisis problems. Decision making for a complex situation often needs a team to work cooperatively to reach consensus awareness of the situation. Team SA is characterized by information sharing, opinion integration and consensus SA generation. In the meantime, various uncertainties are involved in team SA during information collection and awareness generation. Also, the collaboration between team members may take place across distances and needs web-based technology to facilitate it. This paper presents a web-based fuzzy group decision support system (WFGDSS) and demonstrates how this system can provide a means of support for generating team SA in a distributed teamwork context with the ability to handle uncertain information.
Dealing with heterogeneous information in engineering evaluation processes Before selecting a design for a large engineering system, several design proposals are evaluated by studying different key aspects. In such a design assessment process, different criteria need to be evaluated, which can be of both a quantitative and a qualitative nature, and the knowledge provided by experts may be vague and/or incomplete. Consequently, the assessment problems may include different types of information (numerical, linguistic, interval-valued). Experts are usually forced to provide knowledge in the same domain and scale, resulting in higher levels of uncertainty. In this paper, we propose a flexible framework that can be used to model the assessment problems in different domains and scales. A fuzzy evaluation process in the proposed framework is investigated to deal with uncertainty and manage heterogeneous information in engineering evaluation processes.
An overview on the 2-tuple linguistic model for computing with words in decision making: Extensions, applications and challenges Many real world problems need to deal with uncertainty, therefore the management of such uncertainty is usually a big challenge. Hence, different proposals to tackle and manage the uncertainty have been developed. Probabilistic models are quite common, but when the uncertainty is not probabilistic in nature other models have arisen such as fuzzy logic and the fuzzy linguistic approach. The use of linguistic information to model and manage uncertainty has given good results and implies the accomplishment of processes of computing with words. A bird's eye view in the recent specialized literature about linguistic decision making, computing with words, linguistic computing models and their applications shows that the 2-tuple linguistic representation model [44] has been widely-used in the topic during the last decade. This use is because of reasons such as, its accuracy, its usefulness for improving linguistic solving processes in different applications, its interpretability, its ease managing of complex frameworks in which linguistic information is included and so forth. Therefore, after a decade of extensive and intensive successful use of this model in computing with words for different fields, it is the right moment to overview the model, its extensions, specific methodologies, applications and discuss challenges in the topic.
A signed-distance-based approach to importance assessment and multi-criteria group decision analysis based on interval type-2 fuzzy set. Interval type-2 fuzzy sets are associated with greater imprecision and more ambiguities than ordinary fuzzy sets. This paper presents a signed-distance-based method for determining the objective importance of criteria and handling fuzzy, multiple criteria group decision-making problems in a flexible and intelligent way. These advantages arise from the method’s use of interval type-2 trapezoidal fuzzy numbers to represent alternative ratings and the importance of various criteria. An integrated approach to determine the overall importance of the criteria is also developed using the subjective information provided by decision-makers and the objective information delivered by the decision matrix. In addition, a linear programming model is developed to estimate criterion weights and to extend the proposed multiple criteria decision analysis method. Finally, the feasibility and effectiveness of the proposed methods are illustrated by a group decision-making problem of patient-centered medicine in basilar artery occlusion.
A novel hybrid decision-making model for selecting locations in a fuzzy environment The criteria in multiple criteria decision-making (MCDM) problems often have independent and dependent characteristics simultaneously, so they cannot be evaluated by conventional additive or non-additive measures in real-life environments. This paper proposes a new hybrid MCDM model to solve location selection problems, and the results of solving MCDM problems tallied with real-life circumstances due to the use of two concepts in the new hybrid model. The concepts comprise a new structural model and a new evaluation method. The new structural modeling technique is used to draw the hierarchical/network framework, and the MCDM problem is solved using the proposed evaluation method. Here, the fuzzy ANP (analytic network process) is used to construct fuzzy weights of all criteria. Linguistic terms characterized by triangular fuzzy numbers are then used to denote the evaluation values of all alternatives versus various criteria. Finally, the aggregation fuzzy assessments of different alternatives are ranked to determine the best selection. Furthermore, this paper uses a numerical example for selecting the location of an international distribution center in Pacific Asia to illustrate the proposed method. Through this example, this paper demonstrates the applicability of the proposed method, and the results show that this method is an effective means for tackling MCDM problems.
From approximative to descriptive fuzzy classifiers This paper presents an effective and efficient approach for translating fuzzy classification rules that use approximative sets to rules that use descriptive sets and linguistic hedges of predefined meaning. It works by first generating rules that use approximative sets from training data, and then translating the resulting approximative rules into descriptive ones. Hedges that are useful for supporting such translations are provided. The translated rules are functionally equivalent to the original approximative ones, or a close equivalent given search time restrictions, while reflecting their underlying preconceived meaning. Thus, fuzzy, descriptive classifiers can be obtained by taking advantage of any existing approach to approximative modeling, which is generally efficient and accurate, while employing rules that are comprehensible to human users. Experimental results are provided and comparisons to alternative approaches given.
Compressive speech enhancement This paper presents an alternative approach to speech enhancement by using compressed sensing (CS). CS is a new sampling theory, which states that sparse signals can be reconstructed from far fewer measurements than the Nyquist sampling. As such, CS can be exploited to reconstruct only the sparse components (e.g., speech) from the mixture of sparse and non-sparse components (e.g., noise). This is possible because in a time-frequency representation, speech signal is sparse whilst most noise is non-sparse. Derivation shows that on average the signal to noise ratio (SNR) in the compressed domain is greater or equal than the uncompressed domain. Experimental results concur with the derivation and the proposed CS scheme achieves better or similar perceptual evaluation of speech quality (PESQ) scores and segmental SNR compared to other conventional methods in a wide range of input SNR.
An Evaluation of Parameterized Gradient Based Routing With QoE Monitoring for Multiple IPTV Providers. Future communication networks will be faced with increasing and variable traffic demand, due largely to various services introduced on the Internet. One particular service that will greatly impact resource management of future communication networks is IPTV, which aims to provide users with a multitude of multimedia services (e.g. HD and SD) for both live and on demand streaming. The impact of thi...
Generating realistic stimuli for accurate power grid analysis Power analysis tools are an integral component of any current power sign-off methodology. The performance of a design's power grid affects the timing and functionality of a circuit, directly impacting the overall performance. Ensuring power grid robustness implies taking into account, among others, static and dynamic effects of voltage drop, ground bounce, and electromigration. This type of verification is usually done by simulation, targeting a worst-case scenario where devices, switching almost simultaneously, could impose stern current demands on the power grid. While determination of the exact worst-case switching conditions from the grid perspective is usually not practical, the choice of simulation stimuli has a critical effect on the results of the analysis. Targetting safe but unrealistic settings could lead to pessimistic results and costly overdesigns in terms of die area. In this article we describe a software tool that generates a reasonable, realistic, set of stimuli for simulation. The approach proposed accounts for timing and spatial restrictions that arise from the circuit's netlist and placement and generates an approximation to the worst-case condition. The resulting stimuli indicate that only a fraction of the gates change in any given timing window, leading to a more robust verification methodology, especially in the dynamic case. Generating such stimuli is akin to performing a standard static timing analysis, so the tool fits well within conventional design frameworks. Furthermore, the tool can be used for hotspot detection in early design stages.
1.025514
0.011386
0.010406
0.010317
0.006885
0.005197
0.002663
0.000696
0.000233
0.000102
0.000009
0
0
0
Statistical Sampling-Based Parametric Analysis of Power Grids A statistical sampling-based parametric analysis is presented for analyzing large power grids in a "localized" fashion. By combining random walks with the notion of "importance sampling," the proposed technique is capable of efficiently computing the impacts of multiple circuit parameters on selected network nodes. A "new localized" sensitivity analysis is first proposed to solve not only the nominal node response but also its sensitivities with respect to multiple parameters using a single run of the random walks algorithm. This sampling-based technique is further extended from the first-order sensitivity analysis to a more general second-order analysis. By exploiting the natural spatial locality inherent in the proposed algorithm formulation, the second-order analysis can be performed efficiently even for a large number of global and local variation sources. The theoretical convergence properties of three importance sampling estimators for power grid analysis are presented, and their effectiveness is compared experimentally on several examples. The superior performance of the proposed technique is demonstrated by analyzing several large power grids under process and current loading variations to which the application of the existing brute-force simulation techniques becomes completely infeasible
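For readers unfamiliar with the random-walk formulation that the sensitivity analysis above builds on: the voltage of a non-pad node is the expected payoff of a walk that moves to neighbors with probabilities proportional to edge conductances, pays a local IR-drop term at every visited node, and ends with the pad voltage when it reaches a supply pad. The sketch below implements only this plain estimator (no importance sampling, no sensitivities) on a made-up two-node chain; all names and values are illustrative.

```python
import random

def random_walk_voltage(node, neighbors, conduct, load, pads, n_walks=20000):
    """Monte Carlo estimate of one node voltage in a resistive power grid.

    neighbors[k]  : nodes adjacent to k
    conduct[k][j] : conductance of edge (k, j), in siemens
    load[k]       : current drawn at node k, in amperes
    pads          : dict mapping pad nodes to their fixed voltages
    """
    total = 0.0
    for _ in range(n_walks):
        k, acc = node, 0.0
        while k not in pads:
            g_sum = sum(conduct[k][j] for j in neighbors[k])
            acc -= load[k] / g_sum                    # local IR-drop contribution
            r = random.random() * g_sum               # move to neighbor j w.p. g_kj / g_sum
            for j in neighbors[k]:
                r -= conduct[k][j]
                if r <= 0.0:
                    break
            k = j
        total += acc + pads[k]                        # walk ends at a pad
    return total / n_walks

# Two grid nodes between two 1.0 V pads, 1 S edges, 0.1 A loads; exact answer is 0.9 V.
nbrs = {'n1': ['p0', 'n2'], 'n2': ['n1', 'p3']}
g = {'n1': {'p0': 1.0, 'n2': 1.0}, 'n2': {'n1': 1.0, 'p3': 1.0}}
loads = {'n1': 0.1, 'n2': 0.1}
print(random_walk_voltage('n1', nbrs, g, loads, {'p0': 1.0, 'p3': 1.0}))
```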
Multigrid on GPU: Tackling Power Grid Analysis on parallel SIMT platforms The challenging task of analyzing on-chip power (ground) distribution networks with multi-million node complexity and beyond is key to today's large chip designs. For the first time, we show how to exploit recent massively parallel single-instruction multiple-thread (SIMT) based graphics processing unit (GPU) platforms to tackle power grid analysis with promising performance. Several key enablers including GPU-specific algorithm design, circuit topology transformation, workload partitioning, performance tuning are embodied in our GPU-accelerated hybrid multigrid algorithm, GpuHMD, and its implementation. In particular, a proper interplay between algorithm design and SIMT architecture consideration is shown to be essential to achieve good runtime performance. Different from the standard CPU based CAD development, care must be taken to balance between computing and memory access, reduce random memory access patterns and simplify flow control to achieve efficiency on the GPU platform. Extensive experiments on industrial and synthetic benchmarks have shown that the proposed GpuHMD engine can achieve 100X runtime speedup over a state-of-the-art direct solver and be more than 15X faster than the CPU based multigrid implementation. The DC analysis of a 1.6 million-node industrial power grid benchmark can be accurately solved in three seconds with less than 50MB memory on a commodity GPU. It is observed that the proposed approach scales favorably with the circuit complexity, at a rate about one second per million nodes.
Efficient large-scale power grid analysis based on preconditioned krylov-subspace iterative methods In this paper, we propose preconditioned Krylov-subspace iterative methods to perform efficient DC and transient simulations for large-scale linear circuits with an emphasis on power delivery circuits. We also prove that a circuit with inductors can be simplified from MNA to NA format, and the matrix becomes an s.p.d matrix. This property makes it suitable for the conjugate gradient with incomplete Cholesky decomposition as the preconditioner, which is faster than other direct and iterative methods. Extensive experimental results on large-scale industrial power grid circuits show that our method is over 200 times faster for DC analysis and around 10 times faster for transient simulation compared to SPICE3. Furthermore, our algorithm reduces over 75% of memory usage than SPICE3 while the accuracy is not compromised.
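A minimal sketch of the preconditioned-Krylov idea on a stand-in SPD system, using SciPy's conjugate gradient. SciPy ships no incomplete Cholesky factorization, so an incomplete LU preconditioner is used here in its place; the matrix, drop tolerance and fill factor are illustrative assumptions, not the paper's setup.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Small SPD system standing in for a power-grid conductance matrix:
# a 2-D grid Laplacian plus a small diagonal term (pad/ground connections).
n = 50
lap1d = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
A = (sp.kronsum(lap1d, lap1d) + 0.01 * sp.eye(n * n)).tocsc()
b = np.ones(A.shape[0])                      # unit current loads

# Incomplete LU as a stand-in for the incomplete Cholesky preconditioner.
ilu = spla.spilu(A, drop_tol=1e-4, fill_factor=10)
M = spla.LinearOperator(A.shape, matvec=ilu.solve)

x, info = spla.cg(A, b, M=M)
print(info, np.linalg.norm(A @ x - b))       # info == 0 means CG converged
```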
A multigrid-like technique for power grid analysis Modern submicron very large scale integration designs include huge power grids that are required to distribute large amounts of current, at increasingly lower voltages. The resulting voltage drop on the grid reduces noise margin and increases gate delay, resulting in a serious performance impact. Checking the integrity of the supply voltage using traditional circuit simulation is not practical, for reasons of time and memory complexity. The authors propose a novel multigrid-like technique for the analysis of power grids. The grid is reduced to a coarser structure, and the solution is mapped back to the original grid. Experimental results show that the proposed method is very efficient as well as suitable for both de and transient analysis of power grids.
Semantics of Context-Free Languages "Meaning" may be assigned to a string in a context-free language by defining "attributes" of the symbols in a derivation tree for that string. The attributes can be defined by functions associated with each production in the grammar. This paper examines the implications of this process when some of the attributes are "synthesized", i.e., defined solely in terms of attributes of the descendants of the corresponding nonterminal symbol, while other attributes are "inherited", i.e., defined in terms of attributes of the ancestors of the nonterminal symbol. An algorithm is given which detects when such semantic rules could possibly lead to circular definition of some attributes. An example is given of a simple programming language defined with both inherited and synthesized attributes, and the method of definition is compared to other techniques for formal specification of semantics which have appeared in the literature. A simple technique for specifying the "meaning" of languages defined by context-free grammars is introduced in Section 1 of this paper, and its basic mathematical properties are investigated in Sections 2 and 3. An example which indicates how the technique can be applied to the formal definition of programming languages is described in Section 4, and finally, Section 5 contains a somewhat biased comparison of the present method to other known techniques for semantic definition. The discussion in this paper is oriented primarily towards programming languages, but the same methods appear to be relevant also in the study of natural languages. 1. Introduction. Let us suppose that we want to give a precise definition of binary notation for numbers. This can be done in many ways, and in this section we want to consider a manner of definition which can be generalized so that the meaning of other notations can be expressed in the same way. One such way to define binary notation is to base a definition on
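A loose Python paraphrase of the binary-notation example mentioned above: inherited attributes are modeled as arguments passed down (the position exponent "scale"), synthesized attributes as values returned up (the numeric value of a bit or bit list). This is only an illustration of the attribute flow, not Knuth's exact grammar.

```python
def value(bits):
    """Evaluate a binary numeral via one inherited and one synthesized attribute.

    Each bit B gets an inherited attribute 'scale' (its position exponent,
    passed down from the list L containing it) and a synthesized attribute
    'val' (0 or 2**scale, passed back up); the value of a list L is the sum
    of the synthesized values of its bits.
    """
    def bit_val(b, scale):            # B ::= '0' | '1'  (synthesized from inherited scale)
        return int(b) * 2 ** scale

    def list_val(bs, scale):          # L ::= B | L B    (the last bit inherits 'scale')
        if len(bs) == 1:
            return bit_val(bs[0], scale)
        return list_val(bs[:-1], scale + 1) + bit_val(bs[-1], scale)

    intpart, _, frac = bits.partition('.')
    total = list_val(intpart, 0)
    if frac:                          # fractional bits get negative scales
        total += sum(bit_val(b, -(i + 1)) for i, b in enumerate(frac))
    return total

print(value('1101.01'))   # 13.25
```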
Parameterized block-based statistical timing analysis with non-Gaussian parameters, nonlinear delay functions Variability of process parameters makes prediction of digital circuit timing characteristics an important and challenging problem in modern chip design. Recently, statistical static timing analysis (statistical STA) has been proposed as a solution. Unfortunately, the existing approaches either do not consider explicit gate delay dependence on process parameters (Liou, et al., 2001), (Orshansky, et al., 2002), (Devgan, et al., 2003), (Agarwal, et al., 2003) or restrict analysis to linear Gaussian parameters only (Visweswariah, et al., 2004), (Chang, et al., 2003). Here the authors extended the capabilities of parameterized block-based statistical STA (Visweswariah, et al., 2004) to handle nonlinear function of delays and non-Gaussian parameters, while retaining maximum efficiency of processing linear Gaussian parameters. The novel technique improves accuracy in predicting circuit timing characteristics and retains such benefits of parameterized block-based statistical STA as an incremental mode of operation, computation of criticality probabilities and sensitivities to process parameter variations. The authors' technique was implemented in an industrial statistical timing analysis tool. The experiments with large digital blocks showed both efficiency and accuracy of the proposed technique.
Fuzzy set methods for qualitative and natural language oriented simulation The author discusses the approach of using fuzzy set theory to create a formal way of viewing the qualitative simulation of models whose states, inputs, outputs, and parameters are uncertain. Simulation was performed using detailed and accurate models, and it was shown how input and output trajectories could reflect linguistic (or qualitative) changes in a system. Uncertain variables are encoded using triangular fuzzy numbers, and three distinct fuzzy simulation approaches (Monte Carlo, correlated and uncorrelated) are defined. The methods discussed are also valid for discrete event simulation; experiments have been performed on the fuzzy simulation of a single server queuing model. In addition, an existing C-based simulation toolkit, SimPack, was augmented to include the capabilities for modeling using fuzzy arithmetic and linguistic association, and a C++ class definition was coded for fuzzy number types
A training algorithm for optimal margin classifiers A training algorithm that maximizes the margin between the training patterns and the decision boundary is presented. The technique is applicable to a wide variety of the classification functions, including Perceptrons, polynomials, and Radial Basis Functions. The effective number of parameters is adjusted automatically to match the complexity of the problem. The solution is expressed as a linear combination of supporting patterns. These are the subset of training patterns that are closest to the decision boundary. Bounds on the generalization performance based on the leave-one-out method and the VC-dimension are given. Experimental results on optical character recognition problems demonstrate the good generalization obtained when compared with other learning algorithms.
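A quick illustration of the margin-maximization idea using scikit-learn's SVC with a linear kernel, where a large C approximates the hard-margin classifier described in the abstract; the synthetic data and the choice of C are assumptions made here for illustration only.

```python
import numpy as np
from sklearn.svm import SVC

# Two separable Gaussian blobs; a linear kernel recovers the maximum-margin plane.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-2, 0.5, (50, 2)), rng.normal(2, 0.5, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

clf = SVC(kernel="linear", C=1e3)     # large C approximates the hard-margin case
clf.fit(X, y)

# The solution is a linear combination of the "supporting patterns" only.
print("number of support vectors:", clf.support_vectors_.shape[0])
print("margin width:", 2.0 / np.linalg.norm(clf.coef_))
```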
A Bayesian approach to image expansion for improved definition. Accurate image expansion is important in many areas of image analysis. Common methods of expansion, such as linear and spline techniques, tend to smooth the image data at edge regions. This paper introduces a method for nonlinear image expansion which preserves the discontinuities of the original image, producing an expanded image with improved definition. The maximum a posteriori (MAP) estimation techniques that are proposed for noise-free and noisy images result in the optimization of convex functionals. The expanded images produced from these methods will be shown to be aesthetically and quantitatively superior to images expanded by the standard methods of replication, linear interpolation, and cubic B-spline expansion.
A comparative study of ranking methods, similarity measures and uncertainty measures for interval type-2 fuzzy sets Ranking methods, similarity measures and uncertainty measures are very important concepts for interval type-2 fuzzy sets (IT2 FSs). So far, there is only one ranking method for such sets, whereas there are many similarity and uncertainty measures. A new ranking method and a new similarity measure for IT2 FSs are proposed in this paper. All these ranking methods, similarity measures and uncertainty measures are compared based on real survey data and then the most suitable ranking method, similarity measure and uncertainty measure that can be used in the computing with words paradigm are suggested. The results are useful in understanding the uncertainties associated with linguistic terms and hence how to use them effectively in survey design and linguistic information processing.
Is Gauss Quadrature Better than Clenshaw-Curtis? We compare the convergence behavior of Gauss quadrature with that of its younger brother, Clenshaw-Curtis. Seven-line MATLAB codes are presented that implement both methods, and experiments show that the supposed factor-of-2 advantage of Gauss quadrature is rarely realized. Theorems are given to explain this effect. First, following O'Hara and Smith in the 1960s, the phenomenon is explained as a consequence of aliasing of coefficients in Chebyshev expansions. Then another explanation is offered based on the interpretation of a quadrature formula as a rational approximation of $\log((z+1)/(z-1))$ in the complex plane. Gauss quadrature corresponds to Padé approximation at $z=\infty$. Clenshaw-Curtis quadrature corresponds to an approximation whose order of accuracy at $z=\infty$ is only half as high, but which is nevertheless equally accurate near $[-1,1]$.
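In the spirit of the short MATLAB codes the paper above mentions, the sketch below compares (n+1)-point Clenshaw-Curtis with n-point Gauss-Legendre on a smooth integrand. The Clenshaw-Curtis weights are a direct transcription of the classical formula rather than an FFT construction, and the test integrand is an arbitrary choice.

```python
import numpy as np

def clencurt(n):
    """Clenshaw-Curtis nodes and weights on [-1, 1] (n + 1 points)."""
    theta = np.pi * np.arange(n + 1) / n
    x = np.cos(theta)
    w = np.zeros(n + 1)
    v = np.ones(n - 1)
    if n % 2 == 0:
        w[0] = w[n] = 1.0 / (n ** 2 - 1)
        for k in range(1, n // 2):
            v -= 2.0 * np.cos(2 * k * theta[1:n]) / (4 * k * k - 1)
        v -= np.cos(n * theta[1:n]) / (n ** 2 - 1)
    else:
        w[0] = w[n] = 1.0 / n ** 2
        for k in range(1, (n - 1) // 2 + 1):
            v -= 2.0 * np.cos(2 * k * theta[1:n]) / (4 * k * k - 1)
    w[1:n] = 2.0 * v / n
    return x, w

f = lambda x: np.exp(-x ** 2)               # smooth test integrand
exact = 1.4936482656248540                   # integral of exp(-x^2) over [-1, 1]
for n in (8, 16, 32):
    xg, wg = np.polynomial.legendre.leggauss(n)
    xc, wc = clencurt(n)
    print(n, abs(wg @ f(xg) - exact), abs(wc @ f(xc) - exact))
```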
Opposites and Measures of Extremism in Concepts and Constructs We discuss the distinction between different types of opposites, i.e. negation and antonym, in terms of their representation by fuzzy subsets. The idea of a construct in terms of Kelly's theory of personal construct is discussed. A measure of the extremism of a group of elements with respect to concept and its negation, and with respect to a concept and its antonym is introduced.
Heden's bound on maximal partial spreads We prove Heden's result that the deficiency δ of a maximal partial spread in PG(3, q) is greater than 1 + ½(1+√5)√q unless δ − 1 is a multiple of p, where q = p^n. When q is odd and not a square, we are able to improve this lower bound to roughly √(3q).
Designing type-2 fuzzy logic system controllers via fuzzy Lyapunov synthesis for the output regulator of a servomechanism with nonlinear backlash Fuzzy Lyapunov Synthesis is extended to the design of Type-2 Fuzzy Logic System Controllers for the output regulation problem for a servomechanism with nonlinear backlash. The problem in question is to design a feedback controller so as to obtain the closed-loop system in which all trajectories are bounded and the load of the driver is regulated to a desired position while also attenuating the influence of external disturbances. The servomotor position is the only measurement available for feedback; the proposed extension is far from trivial because of nonminimum phase properties of the system. Performance issues of the Type-2 Fuzzy Logic Regulator constructed are illustrated in a simulation study.
1.2
0.1
0.033333
0.016667
0
0
0
0
0
0
0
0
0
0
Evaluation of Machine Learning Methods for Natural Language Processing Tasks We show that the methodology currently in use for comparing symbolic supervised learning methods applied to human language technol- ogy tasks is unreliable. We show that the interaction between algorithm parameter settings and feature selection within a single algorithm often accounts for a higher variation in results than differences between different algorithms or information sources. We illustrate this with experiments on a number of linguistic datasets. The consequences of this phenomenon are far-reaching, and we discuss possible solutions to this methodological problem.
Coreference resolution using competition learning approach In this paper we propose a competition learning approach to coreference resolution. Traditionally, supervised machine learning approaches adopt the single-candidate model. Nevertheless the preference relationship between the antecedent candidates cannot be determined accurately in this model. By contrast, our approach adopts a twin-candidate learning model. Such a model can present the competition criterion for antecedent candidates reliably, and ensure that the most preferred candidate is selected. Furthermore, our approach applies a candidate filter to reduce the computational cost and data noises during training and resolution. The experimental results on MUC-6 and MUC-7 data set show that our approach can outperform those based on the single-candidate model.
GAMBL, genetic algorithm optimization of memory-based WSD
A Computational Model for Resolving Pronominal Anaphora in Turkish Using Hobbs' Naïve Algorithm
Anaphora resolution: a multi-strategy approach Anaphora resolution has proven to be a very difficult problem; it requires the integrated application of syntactic, semantic, and pragmatic knowledge. This paper examines the hypothesis that instead of attempting to construct a monolithic method for resolving anaphora, the combination of multiple strategies, each exploiting a different knowledge source, proves more effective - theoretically and computationally. Cognitive plausibility is established in that human judgements of the optimal anaphoric referent accord with those of the strategy-based method, and human inability to determine a unique referent corresponds to the cases where different strategies offer conflicting candidates for the anaphoric referent.
Providing a unified account of definite noun phrases in discourse
Detecting Faces in Images: A Survey Images containing faces are essential to intelligent vision-based human computer interaction, and research efforts in face processing include face recognition, face tracking, pose estimation, and expression recognition. However, many reported methods assume that the faces in an image or an image sequence have been identified and localized. To build fully automated systems that analyze the information contained in face images, robust and efficient face detection algorithms are required. Given a single image, the goal of face detection is to identify all image regions which contain a face regardless of its three-dimensional position, orientation, and lighting conditions. Such a problem is challenging because faces are nonrigid and have a high degree of variability in size, shape, color, and texture. Numerous techniques have been developed to detect faces in a single image, and the purpose of this paper is to categorize and evaluate these algorithms. We also discuss relevant issues such as data collection, evaluation metrics, and benchmarking. After analyzing these algorithms and identifying their limitations, we conclude with several promising directions for future research.
On the capacity of MIMO broadcast channels with partial side information In multiple-antenna broadcast channels, unlike point-to-point multiple-antenna channels, the multiuser capacity depends heavily on whether the transmitter knows the channel coefficients to each user. For instance, in a Gaussian broadcast channel with M transmit antennas and n single-antenna users, the sum rate capacity scales like M log log n for large n if perfect channel state information (CSI) is available at the transmitter, yet only logarithmically with M if it is not. In systems with large n, obtaining full CSI from all users may not be feasible. Since lack of CSI does not lead to multiuser gains, it is therefore of interest to investigate transmission schemes that employ only partial CSI. We propose a scheme that constructs M random beams and that transmits information to the users with the highest signal-to-noise-plus-interference ratios (SINRs), which can be made available to the transmitter with very little feedback. For fixed M and n increasing, the throughput of our scheme scales as M log log(nN), where N is the number of receive antennas of each user. This is precisely the same scaling obtained with perfect CSI using dirty paper coding. We furthermore show that a linear increase in throughput with M can be obtained provided that M does not grow faster than log n. We also study the fairness of our scheduling in a heterogeneous network and show that, when M is large enough, the system becomes interference dominated and the probability of transmitting to any user converges to 1/n, irrespective of its path loss. In fact, using M = α log n transmit antennas emerges as a desirable operating point, both in terms of providing linear scaling of the throughput with M as well as in guaranteeing fairness.
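A sketch of the random-beam construction and SINR-based user selection described above, for single-antenna users: M orthonormal beams are drawn via a QR factorization, each user computes its SINR on each beam, and each beam is assigned to the user reporting the highest SINR. The power normalization (equal power per beam, total SNR rho) is one common convention and may differ from the paper's exact setup.

```python
import numpy as np

def random_beam_scheduling(H, rho, M):
    """Opportunistic random beamforming: best user per beam from SINR feedback.

    H   : n x Mt matrix of user channel row vectors (one receive antenna per user)
    rho : total transmit SNR, split equally over the M beams
    M   : number of random orthonormal beams (taken equal to Mt here)
    """
    n, Mt = H.shape
    # M random orthonormal beams via QR of a complex Gaussian matrix.
    G = (np.random.randn(Mt, M) + 1j * np.random.randn(Mt, M)) / np.sqrt(2)
    Q, _ = np.linalg.qr(G)
    gains = np.abs(H @ Q) ** 2                      # |h_i . phi_m|^2
    total = gains.sum(axis=1, keepdims=True)
    sinr = gains / (M / rho + total - gains)        # interference from the other beams
    return sinr.argmax(axis=0)                      # index of the best user per beam

H = (np.random.randn(100, 4) + 1j * np.random.randn(100, 4)) / np.sqrt(2)
print(random_beam_scheduling(H, rho=10.0, M=4))
```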
Type-2 fuzzy ontology-based semantic knowledge for collision avoidance of autonomous underwater vehicles. The volume of obstacles encountered in the marine environment is rapidly increasing, which makes the development of collision avoidance systems more challenging. Several fuzzy ontology-based simulators have been proposed to provide a virtual platform for the analysis of maritime missions. However, due to the simulators’ limitations, ontology-based knowledge cannot be utilized to evaluate maritime robot algorithms and to avoid collisions. The existing simulators must be equipped with smart semantic domain knowledge to provide an efficient framework for the decision-making system of AUVs. This article presents type-2 fuzzy ontology-based semantic knowledge (T2FOBSK) and a simulator for marine users that will reduce experimental time and the cost of marine robots and will evaluate algorithms intelligently. The system reformulates the user’s query to extract the positions of AUVs and obstacles and convert them to a proper format for the simulator. The simulator uses semantic knowledge to calculate the degree of collision risk and to avoid obstacles. The available type-1 fuzzy ontology-based approach cannot extract intensively blurred data from the hazy marine environment to offer actual solutions. Therefore, we propose a type-2 fuzzy ontology to provide accurate information about collision risk and the marine environment during real-time marine operations. Moreover, the type-2 fuzzy ontology is designed using Protégé OWL-2 tools. The DL query and SPARQL query are used to evaluate the ontology. The distance to closest point of approach (DCPA), time to closest point of approach (TCPA) and variation of compass degree (VCD) are used to calculate the degree of collision risk between AUVs and obstacles. The experimental and simulation results show that the proposed architecture is highly efficient and highly productive for marine missions and the real-time decision-making system of AUVs.
Reduction and axiomization of covering generalized rough sets This paper investigates some basic properties of covering generalized rough sets, and their comparison with the corresponding ones of Pawlak's rough sets, a tool for data mining. The focus here is on the concepts and conditions for two coverings to generate the same covering lower approximation or the same covering upper approximation. The concept of reducts of coverings is introduced and the procedure to find a reduct for a covering is given. It has been proved that the reduct of a covering is the minimal covering that generates the same covering lower approximation or the same covering upper approximation, so this concept is also a technique to get rid of redundancy in data mining. Furthermore, it has been shown that covering lower and upper approximations determine each other. Finally, a set of axioms is constructed to characterize the covering lower approximation operation.
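For concreteness, a tiny sketch of one commonly used pair of covering approximations (lower: union of blocks contained in X; upper: union of blocks meeting X). Several non-equivalent definitions circulate in the covering rough-set literature, so this illustrates the general notion rather than the specific operators studied in the paper.

```python
def covering_approximations(covering, X):
    """Covering-based lower/upper approximations of a set X (one common definition).

    lower(X): union of the covering blocks entirely contained in X
    upper(X): union of the covering blocks that intersect X
    """
    X = set(X)
    lower = set().union(*(K for K in covering if K <= X))
    upper = set().union(*(K for K in covering if K & X))
    return lower, upper

C = [{1, 2}, {2, 3}, {4}, {4, 5}]                 # a covering of U = {1, 2, 3, 4, 5}
print(covering_approximations(C, {1, 2, 4}))      # ({1, 2, 4}, {1, 2, 3, 4, 5})
```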
Sparse Reconstruction by Separable Approximation Finding sparse approximate solutions to large underdetermined linear systems of equations is a common problem in signal/image processing and statistics. Basis pursuit, the least absolute shrinkage and selection operator (LASSO), wavelet-based deconvolution and reconstruction, and compressed sensing (CS) are a few well-known areas in which problems of this type appear. One standard approach is to minimize an objective function that includes a quadratic (lscr 2) error term added to a sparsity-inducing (usually lscr1) regularizater. We present an algorithmic framework for the more general problem of minimizing the sum of a smooth convex function and a nonsmooth, possibly nonconvex regularizer. We propose iterative methods in which each step is obtained by solving an optimization subproblem involving a quadratic term with diagonal Hessian (i.e., separable in the unknowns) plus the original sparsity-inducing regularizer; our approach is suitable for cases in which this subproblem can be solved much more rapidly than the original problem. Under mild conditions (namely convexity of the regularizer), we prove convergence of the proposed iterative algorithm to a minimum of the objective function. In addition to solving the standard lscr2-lscr1 case, our framework yields efficient solution techniques for other regularizers, such as an lscrinfin norm and group-separable regularizers. It also generalizes immediately to the case in which the data is complex rather than real. Experiments with CS problems show that our approach is competitive with the fastest known methods for the standard lscr2-lscr1 problem, as well as being efficient on problems with other separable regularization terms.
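A bare-bones iterative shrinkage sketch for the l2-l1 case described above: each step solves the separable subproblem (a quadratic term with a diagonal, here constant, Hessian plus the l1 regularizer), whose closed-form solution is the soft-threshold. SpaRSA itself picks the step size adaptively (e.g., with a Barzilai-Borwein rule) and handles more general regularizers; the fixed step, the test problem and the parameter values below are assumptions for illustration.

```python
import numpy as np

def ista(A, b, lam, step=None, n_iter=300):
    """Iterative shrinkage for min 0.5*||Ax - b||^2 + lam*||x||_1."""
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)
        z = x - step * grad                          # gradient step on the smooth term
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)   # soft-threshold
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((60, 200))
x_true = np.zeros(200)
x_true[[5, 50, 120]] = [1.0, -2.0, 1.5]
b = A @ x_true + 0.01 * rng.standard_normal(60)
x_hat = ista(A, b, lam=0.1)
print(np.flatnonzero(np.abs(x_hat) > 0.2))           # large entries should lie on {5, 50, 120}
```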
QoE Aware Service Delivery in Distributed Environment Service delivery and customer satisfaction are closely related aspects of a sound commercial management platform. The technical side of this issue concerns QoS parameters, which the platform can handle at least partially. Subjective psychological issues and human cognitive aspects are typically left unconsidered, yet they directly determine the Quality of Experience (QoE). These factors ultimately have to be treated as key input for a successful business operation between a customer and a company. In our work, a multi-disciplinary approach is taken to propose a QoE interaction model based on theoretical results from various fields including psychology, cognitive sciences, sociology, service ecosystems and information technology. In this paper a QoE evaluator is described for assessing service delivery in a distributed and integrated environment on a per-user and per-service basis.
A model to perform knowledge-based temporal abstraction over multiple signals In this paper we propose the Multivariable Fuzzy Temporal Profile model (MFTP), which enables the projection of expert knowledge on a physical system over a computable description. This description may be used to perform automatic abstraction on a set of parameters that represent the temporal evolution of the system. This model is based on the constraint satisfaction problem (CSP) formalism, which enables an explicit representation of the knowledge, and on fuzzy set theory, from which it inherits the ability to model the imprecision and uncertainty that are characteristic of the vagueness of human knowledge. We also present an application of the MFTP model to the recognition of landmarks in mobile robotics, specifically to the detection of doors on ultrasound sensor signals from a Nomad 200 robot.
Designing type-2 fuzzy logic system controllers via fuzzy Lyapunov synthesis for the output regulator of a servomechanism with nonlinear backlash Fuzzy Lyapunov Synthesis is extended to the design of Type-2 Fuzzy Logic System Controllers for the output regulation problem for a servomechanism with nonlinear backlash. The problem in question is to design a feedback controller so as to obtain the closed-loop system in which all trajectories are bounded and the load of the driver is regulated to a desired position while also attenuating the influence of external disturbances. The servomotor position is the only measurement available for feedback; the proposed extension is far from trivial because of nonminimum phase properties of the system. Performance issues of the Type-2 Fuzzy Logic Regulator constructed are illustrated in a simulation study.
1.131872
0.16
0.16
0.16
0.107705
0.066388
0
0
0
0
0
0
0
0
Padé-Legendre approximants for uncertainty analysis with discontinuous response surfaces A novel uncertainty propagation method for problems characterized by highly non-linear or discontinuous system responses is presented. The approach is based on a Pade-Legendre (PL) formalism which does not require modifications to existing computational tools (non-intrusive approach) and it is a global method. The paper presents a novel PL method for problems in multiple dimensions, which is non-trivial in the Pade literature. In addition, a filtering procedure is developed in order to minimize the errors introduced in the approximation close to the discontinuities. The numerical examples include fluid dynamic problems characterized by shock waves: a simple dual throat nozzle problem with uncertain initial state, and the turbulent transonic flow over a transonic airfoil where the flight conditions are assumed to be uncertain. Results are presented in terms of statistics of both shock position and strength and are compared to Monte Carlo simulations.
Segmentation of Stochastic Images using Level Set Propagation with Uncertain Speed We present an approach for the evolution of level sets under an uncertain velocity leading to stochastic level sets. The uncertain velocity can either be a random variable or a random field, i.e. a spatially varying random quantity, and it may result from measurement errors, noise, unknown material parameters or other sources of uncertainty. The use of stochastic level sets for the segmentation of images with uncertain gray values leads to stochastic domains, because the zero level set is not a single closed curve anymore. Instead, we have a band of possibly infinite thickness which contains all possible locations of the zero level set under the uncertainty. Thus, the approach allows for a probabilistic description of the segmented volume and the shape of the object. For numerical reasons, we use a parabolic approximation of the stochastic level set equation, which is a stochastic partial differential equation, and discretize the equation using the polynomial chaos and a stochastic finite difference scheme. For the verification of the intrusive discretization in the polynomial chaos we performed Monte Carlo and Stochastic Collocation simulations. We demonstrate the power of the stochastic level set approach by showing examples ranging from artificial tests to demonstrate individual aspects to a segmentation of objects in medical images.
Numerical analysis of the Burgers' equation in the presence of uncertainty The Burgers' equation with uncertain initial and boundary conditions is investigated using a polynomial chaos (PC) expansion approach where the solution is represented as a truncated series of stochastic, orthogonal polynomials. The analysis of well-posedness for the system resulting after Galerkin projection is presented and follows the pattern of the corresponding deterministic Burgers equation. The numerical discretization is based on spatial derivative operators satisfying the summation by parts property and weak boundary conditions to ensure stability. Similarly to the deterministic case, the explicit time step for the hyperbolic stochastic problem is proportional to the inverse of the largest eigenvalue of the system matrix. The time step naturally decreases compared to the deterministic case since the spectral radius of the continuous problem grows with the number of polynomial chaos coefficients. An estimate of the eigenvalues is provided. A characteristic analysis of the truncated PC system is presented and gives a qualitative description of the development of the system over time for different initial and boundary conditions. It is shown that a precise statistical characterization of the input uncertainty is required and partial information, e.g. the expected values and the variance, are not sufficient to obtain a solution. An analytical solution is derived and the coefficients of the infinite PC expansion are shown to be smooth, while the corresponding coefficients of the truncated expansion are discontinuous.
Efficient Localization of Discontinuities in Complex Computational Simulations. Surrogate models for computational simulations are input-output approximations that allow computationally intensive analyses, such as uncertainty propagation and inference, to be performed efficiently. When a simulation output does not depend smoothly on its inputs, the error and convergence rate of many approximation methods deteriorate substantially. This paper details a method for efficiently localizing discontinuities in the input parameter domain, so that the model output can be approximated as a piecewise smooth function. The approach comprises an initialization phase, which uses polynomial annihilation to assign function values to different regions and thus seed an automated labeling procedure, followed by a refinement phase that adaptively updates a kernel support vector machine representation of the separating surface via active learning. The overall approach avoids structured grids and exploits any available simplicity in the geometry of the separating surface, thus reducing the number of model evaluations required to localize the discontinuity. The method is illustrated on examples of up to eleven dimensions, including algebraic models and ODE/PDE systems, and demonstrates improved scaling and efficiency over other discontinuity localization approaches.
Simplex Stochastic Collocation with Random Sampling and Extrapolation for Nonhypercube Probability Spaces Stochastic collocation (SC) methods for uncertainty quantification (UQ) in computational problems are usually limited to hypercube probability spaces due to the structured grid of their quadrature rules. Nonhypercube probability spaces with an irregular shape of the parameter domain do, however, occur in practical engineering problems. For example, production tolerances and other geometrical uncertainties can lead to correlated random inputs on nonhypercube domains. In this paper, a simplex stochastic collocation (SSC) method is introduced, as a multielement UQ method based on simplex elements, that can efficiently discretize nonhypercube probability spaces. It combines the Delaunay triangulation of randomized sampling at adaptive element refinements with polynomial extrapolation to the boundaries of the probability domain. The robustness of the extrapolation is quantified by the definition of the essentially extremum diminishing (EED) robustness principle. Numerical examples show that the resulting SSC-EED method achieves superlinear convergence and a linear increase of the initial number of samples with increasing dimensionality. These properties are demonstrated for uniform and nonuniform distributions, and correlated and uncorrelated parameters in problems with 15 dimensions and discontinuous responses.
Multi-Resolution-Analysis Scheme for Uncertainty Quantification in Chemical Systems This paper presents a multi-resolution approach for the propagation of parametric uncertainty in chemical systems. It is motivated by previous studies where Galerkin formulations of Wiener-Hermite expansions were found to fail in the presence of steep dependences of the species concentrations with regard to the reaction rates. The multi-resolution scheme is based on representation of the uncertain concentration in terms of compact polynomial multi-wavelets, allowing for the control of the convergence in terms of polynomial order and resolution level. The resulting representation is shown to greatly improve the robustness of the Galerkin procedure in presence of steep dependences. However, this improvement comes with a higher computational cost which drastically increases with the number of uncertain reaction rates. To overcome this drawback an adaptive strategy is proposed to control locally (in the parameter space) and in time the resolution level. The efficiency of the method is demonstrated for an uncertain chemical system having eight random parameters.
A stochastic particle-mesh scheme for uncertainty propagation in vortical flows A new mesh-particle scheme is constructed for uncertainty propagation in vortical flow. The scheme is based on the incorporation of polynomial chaos (PC) expansions into a Lagrangian particle approximation of the Navier–Stokes equations. The main idea of the method is to use a unique set of particles to transport the stochastic modes of the solution. The particles are transported by the mean velocity field, while their stochastic strengths are updated to account for diffusive and convective effects induced by the coupling between stochastic modes. An integral treatment is used for the evaluation of the coupled stochastic terms, following the framework of the particle strength exchange (PSE) methods, which yields a conservative algorithm. It is also shown that it is possible to apply solution algorithms used in deterministic setting, including particle-mesh techniques and particle remeshing. Thus, the method combines the advantages of particles discretizations with the efficiency of PC representations. Validation of the method on uncertain diffusion and convection problems is first performed. An example is then presented of natural convection of a hot patch of fluid in infinite domain, and the computations are used to illustrate the effectiveness of the approach for both large number of particles and high-order PC expansions.
A stochastic Lagrangian approach for geometrical uncertainties in electrostatics This work proposes a general framework to quantify uncertainty arising from geometrical variations in the electrostatic analysis. The uncertainty associated with geometry is modeled as a random field which is first expanded using either polynomial chaos or Karhunen–Loève expansion in terms of independent random variables. The random field is then treated as a random displacement applied to the conductors defined by the mean geometry, to derive the stochastic Lagrangian boundary integral equation. The surface charge density is modeled as a random field, and is discretized both in the random dimension and space using polynomial chaos and classical boundary element method, respectively. Various numerical examples are presented to study the effect of uncertain geometry on relevant parameters such as capacitance and net electrostatic force. The results obtained using the proposed method are verified using rigorous Monte Carlo simulations. It has been shown that the proposed method accurately predicts the statistics and probability density functions of various relevant parameters.
Algorithm 672: generation of interpolatory quadrature rules of the highest degree of precision with preassigned nodes for general weight functions
Stochastic integral equation solver for efficient variation-aware interconnect extraction In this paper we present an efficient algorithm for extracting the complete statistical distribution of the input impedance of interconnect structures in the presence of a large number of random geometrical variations. The main contribution in this paper is the development of a new algorithm, which combines both Neumann expansion and Hermite expansion, to accurately and efficiently solve stochastic linear system of equations. The second contribution is a new theorem to efficiently obtain the coefficients of the Hermite expansion while computing only low order integrals. We establish the accuracy of the proposed algorithm by solving stochastic linear systems resulting from the discretization of the stochastic volume integral equation and comparing our results to those obtained from other techniques available in the literature, such as Monte Carlo and stochastic finite element analysis. We further prove the computational efficiency of our algorithm by solving large problems that are not solvable using the current state of the art.
Kronecker compressive sensing. Compressive sensing (CS) is an emerging approach for the acquisition of signals having a sparse or compressible representation in some basis. While the CS literature has mostly focused on problems involving 1-D signals and 2-D images, many important applications involve multidimensional signals; the construction of sparsifying bases and measurement systems for such signals is complicated by their higher dimensionality. In this paper, we propose the use of Kronecker product matrices in CS for two purposes. First, such matrices can act as sparsifying bases that jointly model the structure present in all of the signal dimensions. Second, such matrices can represent the measurement protocols used in distributed settings. Our formulation enables the derivation of analytical bounds for the sparse approximation of multidimensional signals and CS recovery performance, as well as a means of evaluating novel distributed measurement schemes.
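The computational appeal of Kronecker measurement matrices rests on the identity vec(A2 X A1ᵀ) = (A1 ⊗ A2) vec(X), which lets a global Kronecker operator be applied one dimension at a time without ever forming the full matrix. The small numpy sketch below only verifies that identity on a toy 2-D signal; matrix sizes are arbitrary assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
# per-dimension measurement matrices (hypothetical small sizes)
A1 = rng.standard_normal((4, 8))   # acts along dimension 1
A2 = rng.standard_normal((3, 6))   # acts along dimension 2
X = rng.standard_normal((6, 8))    # 2-D signal of shape (n2, n1)

# global Kronecker measurement of the vectorized (column-major) signal ...
y_kron = np.kron(A1, A2) @ X.flatten(order="F")
# ... equals the separable per-dimension measurement, never forming kron(A1, A2)
y_sep = (A2 @ X @ A1.T).flatten(order="F")

print(np.allclose(y_kron, y_sep))  # True
```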
Random projection trees and low dimensional manifolds We present a simple variant of the k-d tree which automatically adapts to intrinsic low dimensional structure in data without having to explicitly learn this structure.
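A hedged sketch of the basic construction: project the points onto a random unit direction and split near the median of the projections. The small jitter used below is a simplified stand-in for the paper's randomized split rule, and the leaf size and synthetic low-intrinsic-dimension data are illustrative assumptions.

```python
import numpy as np

def build_rptree(X, leaf_size=20, rng=None):
    """Recursively split points by projecting onto a random direction and
    cutting near the median projection. Returns a nested dict."""
    if rng is None:
        rng = np.random.default_rng()
    if len(X) <= leaf_size:
        return {"leaf": True, "points": X}
    u = rng.standard_normal(X.shape[1])
    u /= np.linalg.norm(u)                 # random unit direction
    proj = X @ u
    # split near the median; the jitter is a simplified version of the
    # randomized split rule described in the paper
    jitter = rng.uniform(-0.05, 0.05) * (proj.max() - proj.min())
    t = np.median(proj) + jitter
    left, right = X[proj <= t], X[proj > t]
    if len(left) == 0 or len(right) == 0:  # degenerate cut, stop here
        return {"leaf": True, "points": X}
    return {"leaf": False, "dir": u, "thresh": t,
            "left": build_rptree(left, leaf_size, rng),
            "right": build_rptree(right, leaf_size, rng)}

# hypothetical data with intrinsic dimension 3 embedded in 50 dimensions
rng = np.random.default_rng(2)
Z = rng.standard_normal((500, 3))
X = Z @ rng.standard_normal((3, 50))
tree = build_rptree(X, leaf_size=25, rng=rng)
```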
Temporal reasoning in medical expert systems Temporal reasoning is especially important for medical expert systems, as the final diagnosis is often strongly affected by the sequence in which the symptoms develop. Current research on time structures in artificial intelligence is reviewed, and a temporal model based on fuzzy set theory is developed. The proposed model allows a simple and natural representation of symptoms, and provides for efficient computation of temporal relationships between symptoms. The applicability of the proposed temporal model to expert systems is demonstrated.
Bousi~Prolog - A Fuzzy Logic Programming Language for Modeling Vague Knowledge and Approximate Reasoning
1.035653
0.033333
0.018716
0.016667
0.008261
0.00483
0.000859
0.000254
0.000021
0.000004
0
0
0
0
Type 2 fuzzy neural networks: an interpretation based on fuzzy inference neural networks with fuzzy parameters It is shown, in this paper, that the NEFCON, NEFCLASS, and NEFPROX systems can be viewed as equivalent to the RBF-like neuro-fuzzy systems. In addition, they can be considered as type 2 networks. Analogously to these systems, a concept of type 2 fuzzy neural networks is proposed
A Type 2 Adaptive Fuzzy Inferencing System For standard fuzzy sets, i.e. type-1 fuzzy sets, the membership function is represented by numbers or some function whose parameters have to be determined by knowledge acquisition or some 'learning' algorithm. Type-1 fuzzy sets make the assumption that there is full certainty in these representations. In most applications this is not the case, which can be considered a serious shortcoming of the approach. As an alternative, type-2 fuzzy sets allow for linguistic grades of membership and, therefore, present a better representation of the 'fuzziness', when applied to a particular problem, than type-1 fuzzy sets. With a type-2 representation, there is no requirement to ask an expert for numerical membership grades - they can be linguistic. However, the associated cost is that the fuzzy membership grades and rules have somehow to be determined, and no recognised approach yet exists. For type-1 systems a number of approaches have been adopted. One in particular is the Adaptive Network Based Fuzzy Inferencing System (ANFIS), which has successfully been applied to a variety of applications. ANFIS takes domain data and learns the membership functions and rules for a type-1 fuzzy inferencing system. Our work aims to extend this approach to type-2 systems. This paper presents this work. Our Type 2 Adaptive Fuzzy Inferencing System has inputs that are linguistic variables (rather than numbers), and the membership functions for these fuzzy grades are learnt from the relationship between these inputs and the given output. The paper describes the algorithm developed, highlighting the theoretical and computational issues involved.
Some Properties of Fuzzy Sets of Type 2
Interval type-2 fuzzy logic systems: theory and design We present the theory and design of interval type-2 fuzzy logic systems (FLSs). We propose an efficient and simplified method to compute the input and antecedent operations for interval type-2 FLSs: one that is based on a general inference formula for them. We introduce the concept of upper and lower membership functions (MFs) and illustrate our efficient inference method for the case of Gaussian primary MFs. We also propose a method for designing an interval type-2 FLS in which we tune its parameters. Finally, we design type-2 FLSs to perform time-series forecasting when a nonstationary time-series is corrupted by additive noise where SNR is uncertain and demonstrate an improved performance over type-1 FLSs
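As a concrete illustration of the upper and lower membership functions for a Gaussian primary MF with uncertain mean (the case highlighted above), the sketch below computes the membership interval at a point and the interval firing degree of a two-antecedent rule under the product t-norm. The full inference and type-reduction machinery (e.g., Karnik-Mendel) is omitted, and all numeric parameters are hypothetical.

```python
import numpy as np

def it2_gaussian_uncertain_mean(x, m1, m2, sigma):
    """Lower/upper membership of an interval type-2 set whose Gaussian
    primary MF has a mean varying in [m1, m2] and a fixed sigma."""
    g = lambda x, m: np.exp(-0.5 * ((x - m) / sigma) ** 2)
    if x < m1:
        upper = g(x, m1)
    elif x > m2:
        upper = g(x, m2)
    else:
        upper = 1.0
    lower = min(g(x, m1), g(x, m2))
    return lower, upper

# interval firing degree of a two-antecedent rule under the product t-norm
l1, u1 = it2_gaussian_uncertain_mean(0.3, m1=0.0, m2=0.2, sigma=0.25)
l2, u2 = it2_gaussian_uncertain_mean(1.1, m1=0.8, m2=1.0, sigma=0.25)
firing_interval = (l1 * l2, u1 * u2)
print(firing_interval)
```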
A neural fuzzy system with linguistic teaching signals A neural fuzzy system learning with linguistic teaching signals is proposed. This system is able to process and learn numerical information as well as linguistic information. It can be used either as an adaptive fuzzy expert system or as an adaptive fuzzy controller. First, we propose a five-layered neural network for the connectionist realization of a fuzzy inference system. The connectionist structure can house fuzzy logic rules and membership functions for fuzzy inference. We use α-level sets of fuzzy numbers to represent linguistic information. The inputs, outputs, and weights of the proposed network can be fuzzy numbers of any shape. Furthermore, they can be hybrid of fuzzy numbers and numerical numbers through the use of fuzzy singletons. Based on interval arithmetics, two kinds of learning schemes are developed for the proposed system: fuzzy supervised learning and fuzzy reinforcement learning. Simulation results are presented to illustrate the performance and applicability of the proposed system
Control of a nonlinear continuous bioreactor with bifurcation by a type-2 fuzzy logic controller The object of this paper is the application of a type-2 fuzzy logic controller to a nonlinear system that presents bifurcations. A bifurcation can cause instability in the system or can create new working conditions which, although stable, are unacceptable. The only practical solution for an efficient control is the use of high performance controllers that take into account the uncertainties of the process. A type-2 fuzzy logic controller is tested by simulation on a nonlinear bioreactor system that is characterized by a transcritical bifurcation. Simulation results show the validity of the proposed controllers in preventing the system from reaching bifurcation and instable or undesirable stable conditions.
Fuzzy logic systems for engineering: a tutorial A fuzzy logic system (FLS) is unique in that it is able to simultaneously handle numerical data and linguistic knowledge. It is a nonlinear mapping of an input data (feature) vector into a scalar output, i.e., it maps numbers into numbers. Fuzzy set theory and fuzzy logic establish the specifics of the nonlinear mapping. This tutorial paper provides a guided tour through those aspects of fuzzy sets and fuzzy logic that are necessary to synthesize an FLS. It does this by starting with crisp set theory and dual logic and demonstrating how both can be extended to their fuzzy counterparts. Because engineering systems are, for the most part, causal, we impose causality as a constraint on the development of the FLS. After synthesizing a FLS, we demonstrate that it can be expressed mathematically as a linear combination of fuzzy basis functions, and is a nonlinear universal function approximator, a property that it shares with feedforward neural networks. The fuzzy basis function expansion is very powerful because its basis functions can be derived from either numerical data or linguistic knowledge, both of which can be cast into the forms of IF-THEN rules
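A minimal sketch of the fuzzy basis function expansion the tutorial arrives at: with singleton fuzzification, product inference, Gaussian membership functions and height defuzzification, the FLS output is a weighted sum of normalized Gaussian basis functions. The one-input, three-rule system below is a toy assumption, not an example from the tutorial.

```python
import numpy as np

# one-input FLS with M = 3 rules: IF x is A_l THEN y is y_l (hypothetical values)
centers = np.array([-1.0, 0.0, 1.5])   # Gaussian antecedent centers
sigmas  = np.array([0.6, 0.5, 0.8])    # Gaussian antecedent spreads
y_bar   = np.array([-2.0, 0.5, 3.0])   # rule consequent centers

def fls(x):
    """Fuzzy basis function expansion:
    f(x) = sum_l y_l * phi_l(x), with phi_l the normalized rule firing degrees."""
    mu = np.exp(-0.5 * ((x - centers) / sigmas) ** 2)   # firing degree of each rule
    phi = mu / mu.sum()                                 # fuzzy basis functions
    return float(phi @ y_bar)

print([round(fls(x), 3) for x in (-1.0, 0.0, 1.0, 2.0)])
```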
On the derivation of memberships for fuzzy sets in expert systems The membership function of a fuzzy set is the cornerstone upon which fuzzy set theory has evolved. The question of where these membership functions come from or how they are derived must be answered. Expert systems commonly deal with fuzzy sets and must use valid membership functions. This paper puts forth a method for constructing a membership function for the fuzzy sets that expert systems deal with. The function may be found by querying the appropriate group and using fuzzy statistics. The concept of a group is defined in this context, as well as a measure of goodness for a membership function. The commonality and differences between membership function for a fuzzy set and probabilistic functions are shown. The systematic methodology presented will facilitate effective use of expert systems.
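A hedged sketch of the polling idea described above: each member of the appropriate group answers yes/no for every candidate value, and the membership grade of a value is the fraction of the group that accepts it. The poll data and the "tall person" concept below are entirely hypothetical.

```python
import numpy as np

# hypothetical poll: each row is one respondent's yes/no answers to
# "is a person of this height 'tall'?" for the candidate heights below
heights_cm = np.array([160, 165, 170, 175, 180, 185, 190])
answers = np.array([
    [0, 0, 0, 1, 1, 1, 1],
    [0, 0, 1, 1, 1, 1, 1],
    [0, 0, 0, 0, 1, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
    [0, 1, 1, 1, 1, 1, 1],
])

# membership grade = fraction of the group accepting each value
membership = answers.mean(axis=0)
for h, m in zip(heights_cm, membership):
    print(f"tall({h} cm) = {m:.2f}")
```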
Statistical confidence intervals for fuzzy data The application of fuzzy sets theory to statistical confidence intervals for unknown fuzzy parameters is proposed in this paper by considering fuzzy random variables. In order to obtain the belief degrees under the sense of fuzzy sets theory, we transform the original problem into the optimization problems. We provide the computational procedure to solve the optimization problems. A numerical example is also provided to illustrate the possible application of fuzzy sets theory to statistical confidence intervals.
On intuitionistic gradation of openness In this paper, we introduce a concept of intuitionistic gradation of openness on fuzzy subsets of a nonempty set X and define an intuitionistic fuzzy topological space. We prove that the category of intuitionistic fuzzy topological spaces and gradation preserving mappings is a topological category. We study compactness of intuitionistic fuzzy topological spaces and prove an analogue of Tychonoff's theorem.
Directional relative position between objects in image processing: a comparison between fuzzy approaches The importance of describing relationships between objects has been highlighted in works in very different areas, including image understanding. Among these relationships, directional relative position relations are important since they provide important information about the spatial arrangement of objects in the scene. Such concepts are rather ambiguous: they defy precise definitions, but human beings have a rather intuitive and common way of understanding and interpreting them. Therefore, in this context, fuzzy methods are appropriate to provide consistent definitions that integrate both quantitative and qualitative knowledge, thus providing a computational representation and interpretation of imprecise spatial relations, expressed in a linguistic way, and including quantitative knowledge. Several fuzzy approaches have been developed in the literature, and the aim of this paper is to review and compare them according to their properties and according to the types of questions they seek to answer.
Bayesian compressive sensing and projection optimization This paper introduces a new problem for which machine-learning tools may make an impact. The problem considered is termed "compressive sensing", in which a real signal of dimension N is measured accurately based on K real measurements. This is achieved under the assumption that the underlying signal has a sparse representation in some basis (e.g., wavelets). In this paper we demonstrate how techniques developed in machine learning, specifically sparse Bayesian regression and active learning, may be leveraged to this new problem. We also point out future research directions in compressive sensing of interest to the machine-learning community.
Statistical multilayer process space coverage for at-speed test Increasingly large process variations make selection of a set of critical paths for at-speed testing essential yet challenging. This paper proposes a novel multilayer process space coverage metric to quantitatively gauge the quality of path selection. To overcome the exponential complexity in computing such a metric, this paper reveals its relationship to a concept called order statistics for a set of correlated random variables, efficient computation of which is a hitherto open problem in the literature. This paper then develops an elegant recursive algorithm to compute the order statistics (or the metric) in provable linear time and space. With a novel data structure, the order statistics can also be incrementally updated. By employing a branch-and-bound path selection algorithm with above techniques, this paper shows that selecting an optimal set of paths for a multi-million-gate design can be performed efficiently. Compared to the state-of-the-art, experimental results show both the efficiency of our algorithms and better quality of our path selection.
Stochastic Behavioral Modeling and Analysis for Analog/Mixed-Signal Circuits It has become increasingly challenging to model the stochastic behavior of analog/mixed-signal (AMS) circuits under large-scale process variations. In this paper, a novel moment-matching-based method has been proposed to accurately extract the probabilistic behavioral distributions of AMS circuits. This method first utilizes Latin hypercube sampling coupled with a correlation control technique to generate a few samples (e.g., sample size is linear with the number of variable parameters) and further analytically evaluate the high-order moments of the circuit behavior with high accuracy. In this way, the arbitrary probabilistic distributions of the circuit behavior can be extracted using a moment-matching method. More importantly, the proposed method has been successfully applied to high-dimensional problems with linear complexity. The experiments demonstrate that the proposed method can provide up to 1666X speedup over the crude Monte Carlo method for the same accuracy.
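As an illustration of the sampling step mentioned above, here is a minimal Latin hypercube sampler: one stratified draw per equal-probability stratum in each dimension, with strata randomly paired across dimensions and then mapped to standard normal parameters through the inverse CDF. The correlation-control technique and the moment-matching stage of the paper are not reproduced, and the sample sizes are arbitrary.

```python
import numpy as np
from scipy.stats import norm

def latin_hypercube(n_samples, n_dims, rng=None):
    """One stratified sample per equal-probability stratum in each dimension,
    with the strata randomly permuted (paired) across dimensions."""
    rng = np.random.default_rng() if rng is None else rng
    u = np.empty((n_samples, n_dims))
    for j in range(n_dims):
        strata = (np.arange(n_samples) + rng.uniform(size=n_samples)) / n_samples
        u[:, j] = rng.permutation(strata)
    return u

# map the unit-cube samples to standard normal process parameters
u = latin_hypercube(50, n_dims=4, rng=np.random.default_rng(3))
samples = norm.ppf(u)
print(samples.mean(axis=0), samples.std(axis=0))
```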
1.203265
0.034017
0.001804
0.001568
0.000627
0.000257
0.000127
0.000041
0.000015
0.000004
0
0
0
0
Exploiting active subspaces to quantify uncertainty in the numerical simulation of the HyShot II scramjet. We present a computational analysis of the reactive flow in a hypersonic scramjet engine with focus on effects of uncertainties in the operating conditions. We employ a novel methodology based on active subspaces to characterize the effects of the input uncertainty on the scramjet performance. The active subspace identifies one-dimensional structure in the map from simulation inputs to quantity of interest that allows us to reparameterize the operating conditions; instead of seven physical parameters, we can use a single derived active variable. This dimension reduction enables otherwise infeasible uncertainty quantification, considering the simulation cost of roughly 9500 CPU-hours per run. For two values of the fuel injection rate, we use a total of 68 simulations to (i) identify the parameters that contribute the most to the variation in the output quantity of interest, (ii) estimate upper and lower bounds on the quantity of interest, (iii) classify sets of operating conditions as safe or unsafe corresponding to a threshold on the output quantity of interest, and (iv) estimate a cumulative distribution function for the quantity of interest.
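A minimal sketch of the active-subspace construction itself (not the scramjet pipeline): estimate C = E[∇f ∇fᵀ] from gradient samples, take its eigendecomposition, and use the dominant eigenvector to define a single active variable. The model function, sample count and input ranges below are hypothetical stand-ins for the expensive simulation.

```python
import numpy as np

rng = np.random.default_rng(4)
m = 7                                   # number of normalized input parameters

def f_and_grad(x):
    """Hypothetical quantity of interest with near-1-D structure:
    it varies mostly along the fixed direction a."""
    a = np.linspace(1.0, 0.1, m)
    t = a @ x
    return np.tanh(t), (1 - np.tanh(t) ** 2) * a

# Monte Carlo estimate of C = E[ grad f grad f^T ]
X = rng.uniform(-1, 1, size=(200, m))
G = np.array([f_and_grad(x)[1] for x in X])
C = G.T @ G / len(G)

eigval, eigvec = np.linalg.eigh(C)
order = np.argsort(eigval)[::-1]
eigval, eigvec = eigval[order], eigvec[:, order]
print("eigenvalue gap:", eigval[0] / max(eigval[1], 1e-16))

# one active variable per sample: project inputs onto the dominant direction
w1 = eigvec[:, 0]
active_variable = X @ w1
```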
Multidimensional Adaptive Relevance Vector Machines for Uncertainty Quantification. We develop a Bayesian uncertainty quantification framework using a local binary tree surrogate model that is able to make use of arbitrary Bayesian regression methods. The tree is adaptively constructed using information about the sensitivity of the response and is biased by the underlying input probability distribution. The local Bayesian regressions are based on a reformulation of the relevance vector machine model that accounts for the multiple output dimensions. A fast algorithm for training the local models is provided. The methodology is demonstrated with examples in the solution of stochastic differential equations.
Bayesian Deep Convolutional Encoder-Decoder Networks for Surrogate Modeling and Uncertainty Quantification. We are interested in the development of surrogate models for uncertainty quantification and propagation in problems governed by stochastic PDEs using a deep convolutional encoder–decoder network in a similar fashion to approaches considered in deep learning for image-to-image regression tasks. Since normal neural networks are data-intensive and cannot provide predictive uncertainty, we propose a Bayesian approach to convolutional neural nets. A recently introduced variational gradient descent algorithm based on Stein's method is scaled to deep convolutional networks to perform approximate Bayesian inference on millions of uncertain network parameters. This approach achieves state of the art performance in terms of predictive accuracy and uncertainty quantification in comparison to other approaches in Bayesian neural networks as well as techniques that include Gaussian processes and ensemble methods even when the training data size is relatively small. To evaluate the performance of this approach, we consider standard uncertainty quantification tasks for flow in heterogeneous media using limited training data consisting of permeability realizations and the corresponding velocity and pressure fields. The performance of the surrogate model developed is very good even though there is no underlying structure shared between the input (permeability) and output (flow/pressure) fields as is often the case in the image-to-image regression models used in computer vision problems. Studies are performed with an underlying stochastic input dimensionality up to 4225 where most other uncertainty quantification methods fail. Uncertainty propagation tasks are considered and the predictive output Bayesian statistics are compared to those obtained with Monte Carlo estimates.
Multi-output separable Gaussian process: Towards an efficient, fully Bayesian paradigm for uncertainty quantification Computer codes simulating physical systems usually have responses that consist of a set of distinct outputs (e.g., velocity and pressure) that evolve also in space and time and depend on many unknown input parameters (e.g., physical constants, initial/boundary conditions, etc.). Furthermore, essential engineering procedures such as uncertainty quantification, inverse problems or design are notoriously difficult to carry out mostly due to the limited simulations available. The aim of this work is to introduce a fully Bayesian approach for treating these problems which accounts for the uncertainty induced by the finite number of observations. Our model is built on a multi-dimensional Gaussian process that explicitly treats correlations between distinct output variables as well as space and/or time. The proper use of a separable covariance function enables us to describe the huge covariance matrix as a Kronecker product of smaller matrices leading to efficient algorithms for carrying out inference and predictions. The novelty of this work, is the recognition that the Gaussian process model defines a posterior probability measure on the function space of possible surrogates for the computer code and the derivation of an algorithmic procedure that allows us to sample it efficiently. We demonstrate how the scheme can be used in uncertainty quantification tasks in order to obtain error bars for the statistics of interest that account for the finite number of observations.
Uncertainty quantification via random domain decomposition and probabilistic collocation on sparse grids Quantitative predictions of the behavior of many deterministic systems are uncertain due to ubiquitous heterogeneity and insufficient characterization by data. We present a computational approach to quantify predictive uncertainty in complex phenomena, which is modeled by (partial) differential equations with uncertain parameters exhibiting multi-scale variability. The approach is motivated by flow in random composites whose internal architecture (spatial arrangement of constitutive materials) and spatial variability of properties of each material are both uncertain. The proposed two-scale framework combines a random domain decomposition (RDD) and a probabilistic collocation method (PCM) on sparse grids to quantify these two sources of uncertainty, respectively. The use of sparse grid points significantly reduces the overall computational cost, especially for random processes with small correlation lengths. A series of one-, two-, and three-dimensional computational examples demonstrate that the combined RDD-PCM approach yields efficient, robust and non-intrusive approximations for the statistics of diffusion in random composites.
An Anisotropic Sparse Grid Stochastic Collocation Method for Partial Differential Equations with Random Input Data This work proposes and analyzes an anisotropic sparse grid stochastic collocation method for solving partial differential equations with random coefficients and forcing terms (input data of the model). The method consists of a Galerkin approximation in the space variables and a collocation, in probability space, on sparse tensor product grids utilizing either Clenshaw-Curtis or Gaussian knots. Even in the presence of nonlinearities, the collocation approach leads to the solution of uncoupled deterministic problems, just as in the Monte Carlo method. This work includes a priori and a posteriori procedures to adapt the anisotropy of the sparse grids to each given problem. These procedures seem to be very effective for the problems under study. The proposed method combines the advantages of isotropic sparse collocation with those of anisotropic full tensor product collocation: the first approach is effective for problems depending on random variables which weigh approximately equally in the solution, while the benefits of the latter approach become apparent when solving highly anisotropic problems depending on a relatively small number of random variables, as in the case where input random variables are Karhunen-Loève truncations of “smooth” random fields. This work also provides a rigorous convergence analysis of the fully discrete problem and demonstrates (sub)exponential convergence in the asymptotic regime and algebraic convergence in the preasymptotic regime, with respect to the total number of collocation points. It also shows that the anisotropic approximation breaks the curse of dimensionality for a wide set of problems. Numerical examples illustrate the theoretical results and are used to compare this approach with several others, including the standard Monte Carlo. In particular, for moderately large-dimensional problems, the sparse grid approach with a properly chosen anisotropy seems to be very efficient and superior to all examined methods.
Physical Systems with Random Uncertainties: Chaos Representations with Arbitrary Probability Measure The basic random variables on which random uncertainties can in a given model depend can be viewed as defining a measure space with respect to which the solution to the mathematical problem can be defined. This measure space is defined on a product measure associated with the collection of basic random variables. This paper clarifies the mathematical structure of this space and its relationship to the underlying spaces associated with each of the random variables. Cases of both dependent and independent basic random variables are addressed. Bases on the product space are developed that can be viewed as generalizations of the standard polynomial chaos approximation. Moreover, two numerical constructions of approximations in this space are presented along with the associated convergence analysis.
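For the standard Wiener-Hermite special case (a single standard Gaussian germ), the chaos coefficients can be computed non-intrusively by Gauss-Hermite quadrature, as sketched below with probabilists' Hermite polynomials. The model response and truncation order are illustrative assumptions; the paper's arbitrary-measure construction generalizes this setting.

```python
import numpy as np
from numpy.polynomial import hermite_e as He
from math import factorial, sqrt, pi

def hermite_pce(f, order, n_quad=40):
    """Non-intrusive Wiener-Hermite expansion of f(xi), xi ~ N(0,1):
    c_k = E[f(xi) He_k(xi)] / k!, estimated by Gauss-Hermite quadrature."""
    nodes, weights = He.hermegauss(n_quad)
    weights = weights / sqrt(2 * pi)          # normalize to a probability measure
    fvals = f(nodes)
    coeffs = []
    for k in range(order + 1):
        basis_k = He.hermeval(nodes, [0] * k + [1])   # He_k evaluated at the nodes
        coeffs.append(np.sum(weights * fvals * basis_k) / factorial(k))
    return np.array(coeffs)

f = lambda xi: np.exp(0.3 * xi)               # hypothetical model response
c = hermite_pce(f, order=6)

# mean and variance follow directly from the chaos coefficients
mean = c[0]
var = sum(factorial(k) * c[k] ** 2 for k in range(1, len(c)))
print(mean, var)                              # compare: exact mean is exp(0.045)
```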
Learning and classification of monotonic ordinal concepts
Proactive secret sharing or: How to cope with perpetual leakage Secret sharing schemes protect secrets by distributing them over different locations (share holders). In particular, in k out of n threshold schemes, security is assured if, throughout the entire life-time of the secret, the adversary is restricted to compromising fewer than k of the n locations. For long-lived and sensitive secrets this protection may be insufficient. We propose an efficient proactive secret sharing scheme, where shares are periodically renewed (without changing the secret) in such a way that information gained by the adversary in one time period is useless for attacking the secret after the shares are renewed. Hence, an adversary willing to learn the secret needs to break into k locations during the same time period (e.g., one day, a week, etc.). Furthermore, in order to guarantee the availability and integrity of the secret, we provide mechanisms to detect maliciously (or accidentally) corrupted shares, as well as mechanisms to secretly recover the correct shares when modification is detected.
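A minimal sketch of the core mechanism: Shamir k-of-n sharing over a prime field, plus a refresh round that adds shares of a random polynomial whose constant term is zero, so the secret is unchanged while shares from different periods cannot be combined. The verifiability and share-recovery protocols of the paper are omitted, and the field size and parameters are toy choices.

```python
import random

P = 2**127 - 1          # prime modulus defining the field GF(P); toy choice

def share(secret, k, n, rng):
    """Shamir k-of-n sharing: shares are points (i, f(i)) of a random
    degree-(k-1) polynomial with f(0) = secret, over GF(P)."""
    coeffs = [secret] + [rng.randrange(P) for _ in range(k - 1)]
    return {i: sum(c * pow(i, j, P) for j, c in enumerate(coeffs)) % P
            for i in range(1, n + 1)}

def refresh(shares, k, rng):
    """Proactive renewal: add shares of a random polynomial with constant
    term 0. The secret is unchanged, but shares from different periods
    are mutually useless to an adversary."""
    delta = share(0, k, len(shares), rng)
    return {i: (shares[i] + delta[i]) % P for i in shares}

def reconstruct(points):
    """Lagrange interpolation at x = 0 over GF(P)."""
    secret = 0
    for i, yi in points.items():
        num, den = 1, 1
        for j in points:
            if j != i:
                num = num * (-j) % P
                den = den * (i - j) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

s = 123456789
shares = share(s, k=3, n=5, rng=random.Random(0))
new_shares = refresh(shares, k=3, rng=random.Random(1))
subset = {i: new_shares[i] for i in (1, 3, 5)}      # any 3 of the 5 renewed shares
print(reconstruct(subset) == s)                      # True
```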
Incremental criticality and yield gradients Criticality and yield gradients are two crucial diagnostic metrics obtained from Statistical Static Timing Analysis (SSTA). They provide valuable information to guide timing optimization and timing-driven physical synthesis. Existing work in the literature, however, computes both metrics in a non-incremental manner, i.e., after one or more changes are made in a previously-timed circuit, both metrics need to be recomputed from scratch, which is obviously undesirable for optimizing large circuits. The major contribution of this paper is to propose two novel techniques to compute both criticality and yield gradients efficiently and incrementally. In addition, while node and edge criticalities are addressed in the literature, this paper for the first time describes a technique to compute path criticalities. To further improve algorithmic efficiency, this paper also proposes a novel technique to update "chip slack" incrementally. Numerical results show our methods to be over two orders of magnitude faster than previous work.
Compressive speech enhancement This paper presents an alternative approach to speech enhancement by using compressed sensing (CS). CS is a new sampling theory, which states that sparse signals can be reconstructed from far fewer measurements than the Nyquist sampling. As such, CS can be exploited to reconstruct only the sparse components (e.g., speech) from the mixture of sparse and non-sparse components (e.g., noise). This is possible because in a time-frequency representation, speech signal is sparse whilst most noise is non-sparse. Derivation shows that on average the signal to noise ratio (SNR) in the compressed domain is greater or equal than the uncompressed domain. Experimental results concur with the derivation and the proposed CS scheme achieves better or similar perceptual evaluation of speech quality (PESQ) scores and segmental SNR compared to other conventional methods in a wide range of input SNR.
Hierarchical statistical characterization of mixed-signal circuits using behavioral modeling A methodology for hierarchical statistical circuit characterization which does not rely upon circuit-level Monte Carlo simulation is presented. The methodology uses principal component analysis, response surface methodology, and statistics to directly calculate the statistical distributions of higher-level parameters from the distributions of lower-level parameters. We have used the methodology to characterize a folded cascode operational amplifier and a phase-locked loop. This methodology permits the statistical characterization of large analog and mixed-signal systems, many of which are extremely time-consuming or impossible to characterize using existing methods.
Dominance-based fuzzy rough set analysis of uncertain and possibilistic data tables In this paper, we propose a dominance-based fuzzy rough set approach for the decision analysis of a preference-ordered uncertain or possibilistic data table, which is comprised of a finite set of objects described by a finite set of criteria. The domains of the criteria may have ordinal properties that express preference scales. In the proposed approach, we first compute the degree of dominance between any two objects based on their imprecise evaluations with respect to each criterion. This results in a valued dominance relation on the universe. Then, we define the degree of adherence to the dominance principle by every pair of objects and the degree of consistency of each object. The consistency degrees of all objects are aggregated to derive the quality of the classification, which we use to define the reducts of a data table. In addition, the upward and downward unions of decision classes are fuzzy subsets of the universe. Thus, the lower and upper approximations of the decision classes based on the valued dominance relation are fuzzy rough sets. By using the lower approximations of the decision classes, we can derive two types of decision rules that can be applied to new decision cases.
Performance and Quality Evaluation of a Personalized Route Planning System Advanced personalization of database applications is a big challenge, in particular for distributed mobile environments. We present several new results from a prototype of a route planning system. We demonstrate how to combine qualitative and quantitative preferences gained from situational aspects and from personal user preferences. For performance studies we analyze the runtime efficiency of the SR-Combine algorithm used to evaluate top-k queries. By determining the cost-ratio of random to sorted accesses, SR-Combine can automatically tune its performance within the given system architecture. Top-k queries are generated by mapping linguistic variables to numerical weightings. Moreover, we analyze the quality of the query results by several test series, systematically varying the mappings of the linguistic variables. We report interesting insights into this rather under-researched important topic. More investigations, incorporating also cognitive issues, need to be conducted in the future.
1.24
0.24
0.12
0.0325
0.006667
0.000393
0.000002
0
0
0
0
0
0
0
Interval-Valued Fuzzy Sets In Soft Computing In this work, we explain the reasons for which, for some specific problems, interval-valued fuzzy sets must be considered a basic component of Soft Computing.
Optimization of type-2 fuzzy systems based on bio-inspired methods: A concise review A review of the optimization methods used in the design of type-2 fuzzy systems, which are relatively novel models of imprecision, has been considered in this work. The fundamental focus of the work has been based on the basic reasons of the need for optimizing type-2 fuzzy systems for different areas of application. Recently, bio-inspired methods have emerged as powerful optimization algorithms for solving complex problems. In the case of designing type-2 fuzzy systems for particular applications, the use of bio-inspired optimization methods have helped in the complex task of finding the appropriate parameter values and structure of the fuzzy systems. In this review, we consider the application of genetic algorithms, particle swarm optimization and ant colony optimization as three different paradigms that help in the design of optimal type-2 fuzzy systems. We also provide a comparison of the different optimization methods for the case of designing type-2 fuzzy systems.
A review on the design and optimization of interval type-2 fuzzy controllers A review of the methods used in the design of interval type-2 fuzzy controllers has been considered in this work. The fundamental focus of the work is based on the basic reasons for optimizing type-2 fuzzy controllers for different areas of application. Recently, bio-inspired methods have emerged as powerful optimization algorithms for solving complex problems. In the case of designing type-2 fuzzy controllers for particular applications, the use of bio-inspired optimization methods have helped in the complex task of finding the appropriate parameter values and structure of the fuzzy systems. In this review, we consider the application of genetic algorithms, particle swarm optimization and ant colony optimization as three different paradigms that help in the design of optimal type-2 fuzzy controllers. We also mention alternative approaches to designing type-2 fuzzy controllers without optimization techniques. We also provide a comparison of the different optimization methods for the case of designing type-2 fuzzy controllers.
On interval fuzzy negations There exist infinitely many ways to extend the classical propositional connectives to the set [0,1], preserving their behaviors in the extremes 0 and 1 exactly as in the classical logic. However, it is a consensus that this issue is not sufficient, and, therefore, these extensions must also preserve some minimal logical properties of the classical connectives. The notions of t-norms, t-conorms, fuzzy negations and fuzzy implications taking these considerations into account. In previous works, the author, joint with other colleagues, generalizes these notions to the set U={[a,b]|0@?a@?b@?1}, providing canonical constructions to obtain, for example, interval t-norms that are the best interval representations of t-norms. In this paper, we consider the notion of interval fuzzy negation and generalize, in a natural way, several notions related with fuzzy negations, such as the ones of equilibrium point and negation-preserving automorphism. We show that the main properties of these notions are preserved in those generalizations.
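A tiny sketch of the canonical interval extension of the standard fuzzy negation N(x) = 1 - x: applying N to every point of a subinterval [a, b] of [0, 1] yields [1 - b, 1 - a], and the degenerate interval [0.5, 0.5] plays the role of the equilibrium point. This illustrates only the simplest instance of the interval negations studied in the paper.

```python
def interval_negation(x):
    """Best interval representation of the standard negation N(t) = 1 - t:
    image of the subinterval [a, b] of [0, 1] under N."""
    a, b = x
    assert 0.0 <= a <= b <= 1.0
    return (1.0 - b, 1.0 - a)

print(interval_negation((0.2, 0.6)))   # (0.4, 0.8)
print(interval_negation((0.5, 0.5)))   # (0.5, 0.5): the degenerate equilibrium point
```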
Subsethood, entropy, and cardinality for interval-valued fuzzy sets---An algebraic derivation In this paper a unified formulation of subsethood, entropy, and cardinality for interval-valued fuzzy sets (IVFSs) is presented. An axiomatic skeleton for subsethood measures in the interval-valued fuzzy setting is proposed, in order for subsethood to reduce to an entropy measure. By exploiting the equivalence between the structures of IVFSs and Atanassov's intuitionistic fuzzy sets (A-IFSs), the notion of average possible cardinality is presented and its connection to least and biggest cardinalities, proposed in [E. Szmidt, J. Kacprzyk, Entropy for intuitionistic fuzzy sets, Fuzzy Sets and Systems 118 (2001) 467-477], is established both algebraically and geometrically. A relation with the cardinality of fuzzy sets (FSs) is also demonstrated. Moreover, the entropy-subsethood and interval-valued fuzzy entropy theorems are stated and algebraically proved, which generalize the work of Kosko [Fuzzy entropy and conditioning, Inform. Sci. 40(2) (1986) 165-174; Fuzziness vs. probability, International Journal of General Systems 17(2-3) (1990) 211-240; Neural Networks and Fuzzy Systems, Prentice-Hall International, Englewood Cliffs, NJ, 1992; Intuitionistic Fuzzy Sets: Theory and Applications, Vol. 35 of Studies in Fuzziness and Soft Computing, Physica-Verlag, Heidelberg, 1999] for FSs. Finally, connections of the proposed subsethood and entropy measures for IVFSs with corresponding definitions for FSs and A-IFSs are provided.
Design of interval type-2 fuzzy sliding-mode controller In this paper, an interval type-2 fuzzy sliding-mode controller (IT2FSMC) is proposed for linear and nonlinear systems. The proposed IT2FSMC is a combination of the interval type-2 fuzzy logic control (IT2FLC) and the sliding-mode control (SMC) which inherits the benefits of these two methods. The objective of the controller is to allow the system to move to the sliding surface and remain in on it so as to ensure the asymptotic stability of the closed-loop system. The Lyapunov stability method is adopted to verify the stability of the interval type-2 fuzzy sliding-mode controller system. The design procedure of the IT2FSMC is explored in detail. A typical second order linear interval system with 50% parameter variations, an inverted pendulum with variation of pole characteristics, and a Duffing forced oscillation with uncertainty and disturbance are adopted to illustrate the validity of the proposed method. The simulation results show that the IT2FSMC achieves the best tracking performance in comparison with the type-1 Fuzzy logic controller (T1FLC), the IT2FLC, and the type-1 fuzzy sliding-mode controller (T1FSMC).
The algebra of fuzzy truth values The purpose of this paper is to give a straightforward mathematical treatment of algebras of fuzzy truth values for type-2 fuzzy sets.
A 2-tuple fuzzy linguistic representation model for computing with words The fuzzy linguistic approach has been applied successfully to many problems. However, there is a limitation of this approach imposed by its information representation model and the computation methods used when fusion processes are performed on linguistic values. This limitation is the loss of information; this loss of information implies a lack of precision in the final results from the fusion of linguistic information. In this paper, we present tools for overcoming this limitation. The linguistic information is expressed by means of 2-tuples, which are composed of a linguistic term and a numeric value assessed in (-0.5, 0.5). This model allows a continuous representation of the linguistic information on its domain; therefore, it can represent any counting of information obtained in an aggregation process. We then develop a computational technique for computing with words without any loss of information. Finally, different classical aggregation operators are extended to deal with the 2-tuple linguistic model.
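A small sketch of the 2-tuple translation functions: Δ maps a value β in [0, g] to a pair (s_i, α) with α the symbolic translation, and Δ⁻¹ maps it back, so aggregation can be carried out numerically without loss of information. The seven-label linguistic term set below is a hypothetical scale, not one taken from the paper.

```python
import math

# hypothetical seven-term linguistic scale, so g = 6
S = ["none", "very_low", "low", "medium", "high", "very_high", "perfect"]

def delta(beta):
    """Translate beta in [0, g] into a 2-tuple (label, alpha)."""
    i = min(int(math.floor(beta + 0.5)), len(S) - 1)   # index of the closest label
    return S[i], beta - i

def delta_inv(label, alpha):
    """Inverse translation: 2-tuple back to its numeric value in [0, g]."""
    return S.index(label) + alpha

# aggregate three 2-tuple assessments by the arithmetic mean, then translate back
assessments = [("high", 0.0), ("medium", 0.25), ("very_high", -0.4)]
beta = sum(delta_inv(l, a) for l, a in assessments) / len(assessments)
print(delta(beta))   # e.g. ('high', -0.38...)
```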
Compressive wireless sensing Compressive Sampling is an emerging theory that is based on the fact that a relatively small number of random projections of a signal can contain most of its salient information. In this paper, we introduce the concept of Compressive Wireless Sensing for sensor networks in which a fusion center retrieves signal field information from an ensemble of spatially distributed sensor nodes. Energy and bandwidth are scarce resources in sensor networks and the relevant metrics of interest in our context are 1) the latency involved in information retrieval; and 2) the associated power-distortion trade-off. It is generally recognized that given sufficient prior knowledge about the sensed data (e.g., statistical characterization, homogeneity, etc.), there exist schemes that have very favorable power-distortion-latency trade-offs. We propose a distributed matched source-channel communication scheme, based in part on recent results in compressive sampling theory, for estimation of sensed data at the fusion center and analyze, as a function of the number of sensor nodes, the trade-offs between power, distortion and latency. Compressive wireless sensing is a universal scheme in the sense that it requires no prior knowledge about the sensed data. This universality, however, comes at the cost of optimality (in terms of a less favorable power-distortion-latency trade-off) and we quantify this cost relative to the case when sufficient prior information about the sensed data is assumed.
Informative Sensing Compressed sensing is a recent set of mathematical results showing that sparse signals can be exactly reconstructed from a small number of linear measurements. Interestingly, for ideal sparse signals with no measurement noise, random measurements allow perfect reconstruction while measurements based on principal component analysis (PCA) or independent component analysis (ICA) do not. At the same time, for other signal and noise distributions, PCA and ICA can significantly outperform random projections in terms of enabling reconstruction from a small number of measurements. In this paper we ask: given the distribution of signals we wish to measure, what are the optimal set of linear projections for compressed sensing? We consider the problem of finding a small number of linear projections that are maximally informative about the signal. Formally, we use the InfoMax criterion and seek to maximize the mutual information between the signal, x, and the (possibly noisy) projection y = Wx. We show that in general the optimal projections are not the principal components of the data nor random projections, but rather a seemingly novel set of projections that capture what is still uncertain about the signal, given the knowledge of distribution. We present analytic solutions for certain special cases including natural images. In particular, for natural images, the near-optimal projections are bandwise random, i.e., incoherent to the sparse bases at a particular frequency band but with more weights on the low-frequencies, which has a physical relation to the multi-resolution representation of images.
The 2-tuple linguistic computational model. Advantages of its linguistic description, accuracy and consistency. The Fuzzy Linguistic Approach has been applied successfully to different areas. The use of linguistic information for modelling expert preferences implies the use of processes of Computing with Words. To accomplish these processes, different approaches have been proposed in the literature: (i) the computational model based on the Extension Principle, (ii) the symbolic one (also called the ordinal approach), and (iii) the 2-tuple linguistic computational model. The main problem of the classical approaches, (i) and (ii), is the loss of information and lack of precision during the computational processes. In this paper, we want to compare the linguistic description, accuracy and consistency of the results obtained using each model against those of the other ones. To do so, we shall solve a Multiexpert Multicriteria Decision-Making problem defined in a multigranularity linguistic context using the different computational approaches. This comparison helps us to decide which model is more adequate for computing with words.
A general framework for accurate statistical timing analysis considering correlations The impact of parameter variations on timing due to process and environmental variations has become significant in recent years. With each new technology node this variability is becoming more prominent. In this work, we present a general statistical timing analysis (STA) framework that captures spatial correlations between gate delays. The technique presented does not make any assumption about the distributions of the parameter variations, gate delays and arrival times. We propose a Taylor-series-expansion-based polynomial representation of gate delays and arrival times that is able to effectively capture the non-linear dependencies arising from increasing parameter variations. In order to reduce the computational complexity introduced by polynomial modeling during STA, an efficient linear-modeling-driven polynomial STA scheme is proposed. On average, the degree-2 polynomial scheme achieved a 7.3× speedup compared to Monte Carlo, with 0.049 units of rms error with respect to Monte Carlo.
Selecting the advanced manufacturing technology using fuzzy multiple attributes group decision making with multiple fuzzy information Selection of advanced manufacturing technology in manufacturing system management is very important for determining manufacturing system competitiveness. This research develops a fuzzy multiple attribute decision-making approach, applied in group decision-making, to improve the advanced manufacturing technology selection process. Since numerous attributes must be considered in evaluating manufacturing technology suitability, and most of the information available at this stage is subjective, imprecise and vague, fuzzy set theory provides a mathematical framework for modeling this imprecision and vagueness. In the proposed approach, a new fusion method of fuzzy information is developed to manage information assessed on different linguistic scales (multi-granularity linguistic term sets) and numerical scales. The flexible manufacturing system adopted in the Taiwanese bicycle industry is employed in this study to demonstrate the computational process of the proposed method. Finally, a sensitivity analysis is performed to examine the robustness of the solution.
Fuzzy optimization of units products in mix-product selection problem using fuzzy linear programming approach In this paper, the modified S-curve membership function methodology is used in a real-life industrial problem of mix-product selection. This problem occurs in production planning management, where a decision maker plays an important role in making decisions in an uncertain environment. As analysts, we try to find a good enough solution for the decision maker to make a final decision. An industrial application of fuzzy linear programming (FLP) through the S-curve membership function has been investigated using a set of real-life data collected from a Chocolate Manufacturing Company. The problem of fuzzy product-mix selection has been defined. The objective of this paper is to find the optimal units of products with a higher level of satisfaction, with vagueness as a key factor. Since several decisions were to be taken, a table of optimal units of products with respect to vagueness and degree of satisfaction has been defined to identify the solution with a higher level of units of products and a higher degree of satisfaction. The fuzzy outcome shows that higher units of products need not lead to a higher degree of satisfaction. The findings of this work indicate that the optimal decision depends on the vagueness factor in the fuzzy system of the mix-product selection problem. Furthermore, a high level of units of products is obtained when the vagueness is low.
1.24
0.017143
0.013333
0.005
0.002729
0.000278
0.000082
0.000015
0
0
0
0
0
0
Directional relative position between objects in image processing: a comparison between fuzzy approaches The importance of describing relationships between objects has been highlighted in works in very different areas, including image understanding. Among these relationships, directional relative position relations are important since they provide an important information about the spatial arrangement of objects in the scene. Such concepts are rather ambiguous, they defy precise definitions, but human beings have a rather intuitive and common way of understanding and interpreting them. Therefore in this context, fuzzy methods are appropriate to provide consistent definitions that integrate both quantitative and qualitative knowledge, thus providing a computational representation and interpretation of imprecise spatial relations, expressed in a linguistic way, and including quantitative knowledge. Several fuzzy approaches have been developed in the literature, and the aim of this paper is to review and compare them according to their properties and according to the types of questions they seek to answer.
A constraint propagation approach to structural model based image segmentation and recognition The interpretation of complex scenes in images requires knowledge regarding the objects in the scene and their spatial arrangement. We propose a method for simultaneously segmenting and recognizing objects in images, that is based on a structural representation of the scene and a constraint propagation method. The structural model is a graph representing the objects in the scene, their appearance and their spatial relations, represented by fuzzy models. The proposed solver is a novel global method that assigns spatial regions to the objects according to the relations in the structural model. We propose to progressively reduce the solution domain by excluding assignments that are inconsistent with a constraint network derived from the structural model. The final segmentation of each object is then performed as a minimal surface extraction. The contributions of this paper are illustrated through the example of brain structure recognition in magnetic resonance images.
Fuzzy spatial constraints and ranked partitioned sampling approach for multiple object tracking. While particle filters are now widely used for object tracking in videos, the case of multiple object tracking still raises a number of issues. Among them, a first, and very important, problem concerns the exponential increase of the number of particles with the number of objects to be tracked, that can make some practical applications intractable. To achieve good tracking performances, we propose to use a Partitioned Sampling method in the estimation process with an additional feature about the ordering sequence in which the objects are processed. We call it Ranked Partitioned Sampling, where the optimal order in which objects should be processed and tracked is estimated jointly with the object state. Another essential point concerns the modeling of possible interactions between objects. As another contribution, we propose to represent these interactions within a formal framework relying on fuzzy sets theory. This allows us to easily model spatial constraints between objects, in a general and formal way. The association of these two contributions was tested on typical videos exhibiting difficult situations such as partial or total occlusions, and appearance or disappearance of objects. We show the benefit of using conjointly these two contributions, in comparison to classical approaches, through multiple object tracking and articulated object tracking experiments on real video sequences. The results show that our approach provides less tracking errors than those obtained with the classical Partitioned Sampling method, without the need for increasing the number of particles.
Multidimensional scaling of fuzzy dissimilarity data Multidimensional scaling is a well-known technique for representing measurements of dissimilarity among objects as distances between points in a p-dimensional space. In this paper, this method is extended to the case where dissimilarities are expressed as intervals or fuzzy numbers. Each object is then no longer represented by a point but by a crisp or a fuzzy region. To determine these regions, two algorithms are proposed and illustrated using typical datasets. Experiments demonstrate the ability of the methods to represent both the structure and the vagueness of dissimilarity measurements.
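As a point of reference for the interval/fuzzy extension described above, the sketch below implements plain classical (Torgerson) multidimensional scaling on a crisp dissimilarity matrix via double centering and an eigendecomposition; the example data are made up, and representing each object as a region, as the paper does, would require further machinery not shown here.

```python
# Classical (metric) MDS sketch: embed objects in R^p from a crisp dissimilarity
# matrix D by double-centering the squared dissimilarities and taking the top
# eigenvectors. This is the crisp baseline that the fuzzy/interval variant extends.
import numpy as np

def classical_mds(D, p=2):
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                  # double-centered Gram matrix
    eigvals, eigvecs = np.linalg.eigh(B)
    order = np.argsort(eigvals)[::-1][:p]        # largest p eigenvalues
    L = np.sqrt(np.clip(eigvals[order], 0, None))
    return eigvecs[:, order] * L                 # n x p coordinates

# Example: distances between 4 points on a line are recovered up to sign/translation.
X_true = np.array([[0.0], [1.0], [3.0], [6.0]])
D = np.abs(X_true - X_true.T)
coords = classical_mds(D, p=1)
print(np.round(coords.ravel(), 3))
```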
Dilation and Erosion of Spatial Bipolar Fuzzy Sets Bipolarity has not been much exploited in the spatial domain yet, although it has many features to manage imprecise and incomplete information that could be interesting in this domain. This paper is a first step to address this issue, and we propose to define mathematical morphology operations on bipolar fuzzy sets (or equivalently interval valued fuzzy sets or intuitionistic fuzzy sets).
Introducing fuzzy spatial constraints in a ranked partitioned sampling for multi-object tracking Dealing with multi-object tracking in a particle filter raises several issues. A first essential point is to model possible interactions between objects. In this article, we represent these interactions using a fuzzy formalism, which allows us to easily model spatial constraints between objects, in a general and formal way. The second issue addressed in this work concerns the practical application of a multi-object tracking with a particle filter. To avoid a decrease of performances, a partitioned sampling method can be employed. However, to achieve good tracking performances, the estimation process requires to know the ordering sequence in which the objects are treated. This problem is solved by introducing, as a second contribution, a ranked partitioned sampling, which aims at estimating both the ordering sequence and the joint state of the objects. Finally, we show the benefit of our two contributions in comparison to classical approaches through two multi-object tracking experiments and the tracking of an articulated object.
Fuzzy Sets
Linguistic Decision-Making Models Using linguistic values to assess results and information about external factors is quite usual in real decision situations. In this article we present a general model for such problems. Utilities are evaluated in a term set of labels and the information is supposed to be linguistic evidence, that is, it is to be represented by a basic assignment of probability (in the sense of Dempster-Shafer) but taking its values on a term set of linguistic likelihoods. Basic decision rules, based on fuzzy risk intervals, are developed and illustrated by several examples. The last section is devoted to analyzing the suitability of considering a hierarchical structure (represented by a tree) for the set of utility labels.
A neural fuzzy system with linguistic teaching signals A neural fuzzy system learning with linguistic teaching signals is proposed. This system is able to process and learn numerical information as well as linguistic information. It can be used either as an adaptive fuzzy expert system or as an adaptive fuzzy controller. First, we propose a five-layered neural network for the connectionist realization of a fuzzy inference system. The connectionist structure can house fuzzy logic rules and membership functions for fuzzy inference. We use α-level sets of fuzzy numbers to represent linguistic information. The inputs, outputs, and weights of the proposed network can be fuzzy numbers of any shape. Furthermore, they can be hybrid of fuzzy numbers and numerical numbers through the use of fuzzy singletons. Based on interval arithmetics, two kinds of learning schemes are developed for the proposed system: fuzzy supervised learning and fuzzy reinforcement learning. Simulation results are presented to illustrate the performance and applicability of the proposed system
Approximate Volume and Integration for Basic Semialgebraic Sets Given a basic compact semialgebraic set $\mathbf{K}\subset\mathbb{R}^n$, we introduce a methodology that generates a sequence converging to the volume of $\mathbf{K}$. This sequence is obtained from optimal values of a hierarchy of either semidefinite or linear programs. Not only the volume but also every finite vector of moments of the probability measure that is uniformly distributed on $\mathbf{K}$ can be approximated as closely as desired, which permits the approximation of the integral on $\mathbf{K}$ of any given polynomial; the extension to integration against some weight functions is also provided. Finally, some numerical issues associated with the algorithms involved are briefly discussed.
Neoclassical analysis: fuzzy continuity and convergence The neoclassical analysis is a field in which fuzzy continuous functions are investigated. For this purpose, new measures of continuity and discontinuity (or defects of continuity) are introduced and studied. Based on such measures, classes of fuzzy continuous functions are defined and their properties are obtained. The class of fuzzy continuous functions may be considered as a fuzzy set of continuous functions. Its support consists of all functions on some topological space X into a metric space Y and its membership function is the corresponding continuity measure. Such an expansion provides a possibility to complete some important classical results. Connections between boundedness and fuzzy continuity are investigated. Criteria of boundedness and local boundedness are obtained for functions on Euclidean spaces. Besides, such new concepts as fuzzy convergence and fuzzy uniform convergence are introduced and investigated. Their properties and connections with fuzzy continuous functions are explicated. Some results, which are obtained here, are similar to the results of classical mathematical analysis, while others differ essentially from those that are proved in classical mathematics. In the first case classical results are consequences of the corresponding results of neoclassical analysis.
Decider: A fuzzy multi-criteria group decision support system Multi-criteria group decision making (MCGDM) aims to support preference-based decision over the available alternatives that are characterized by multiple criteria in a group. To increase the level of overall satisfaction for the final decision across the group and deal with uncertainty in decision process, a fuzzy MCGDM process (FMP) model is established in this study. This FMP model can also aggregate both subjective and objective information under multi-level hierarchies of criteria and evaluators. Based on the FMP model, a fuzzy MCGDM decision support system (called Decider) is developed, which can handle information expressed in linguistic terms, boolean values, as well as numeric values to assess and rank a set of alternatives within a group of decision makers. Real applications indicate that the presented FMP model and the Decider software are able to effectively handle fuzziness in both subjective and objective information and support group decision-making under multi-level criteria with a higher level of satisfaction by decision makers.
Wavelet-domain compressive signal reconstruction using a Hidden Markov Tree model Compressive sensing aims to recover a sparse or compressible signal from a small set of projections onto random vectors; conventional solutions involve linear programming or greedy algorithms that can be computationally expensive. Moreover, these recovery techniques are generic and assume no particular structure in the signal aside from sparsity. In this paper, we propose a new algorithm that enables fast recovery of piecewise smooth signals, a large and useful class of signals whose sparse wavelet expansions feature a distinct "connected tree" structure. Our algorithm fuses recent results on iterative reweighted ℓ1-norm minimization with the wavelet Hidden Markov Tree model. The resulting optimization-based solver outperforms the standard compressive recovery algorithms as well as previously proposed wavelet-based recovery algorithms. As a bonus, the algorithm reduces the number of measurements necessary to achieve low-distortion reconstruction.
The laws of large numbers for fuzzy random variables The new attempt of weak and strong law of large numbers for fuzzy random variables is discussed in this paper by proposing the convergence in probability and convergence with probability one for fuzzy random variables. We first consider the limit properties of fuzzy numbers by invoking the Hausdorff metric, and then we extend it to the convergence in probability and convergence with probability one for fuzzy random variables. We provide the notion of weak and strong convergence in probability and weak and strong convergence with probability one for fuzzy random variables. Finally we come up with the weak and strong law of large numbers for fuzzy random variables in weak and strong sense. (C) 2000 Elsevier Science B.V. All rights reserved.
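A quick numerical illustration of the flavor of result discussed here (an empirical sketch, not the paper's proofs): treat each fuzzy random variable at a fixed α-cut as a random interval, average the sample intervals endpoint-wise, and watch the Hausdorff distance to the expectation interval shrink as the sample size grows. The interval model below is invented for the demonstration.

```python
# Empirical illustration of a law of large numbers for interval-valued (alpha-cut)
# random variables: the endpoint-wise sample mean interval converges to the
# expectation interval in the Hausdorff metric. This is a numerical sketch only.
import numpy as np

rng = np.random.default_rng(1)

def hausdorff_interval(a, b):
    """Hausdorff distance between intervals a = [a1, a2] and b = [b1, b2]."""
    return max(abs(a[0] - b[0]), abs(a[1] - b[1]))

def sample_intervals(n):
    """n random intervals [c - r, c + r] with c ~ N(2, 1) and r ~ Uniform(0, 1)."""
    c = rng.normal(2.0, 1.0, size=n)
    r = rng.uniform(0.0, 1.0, size=n)
    return np.column_stack([c - r, c + r])

expected = np.array([2.0 - 0.5, 2.0 + 0.5])   # [E(c - r), E(c + r)]
for n in [10, 100, 1000, 10000, 100000]:
    X = sample_intervals(n)
    mean_interval = X.mean(axis=0)
    print(n, round(hausdorff_interval(mean_interval, expected), 4))
```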
1.101937
0.1
0.1
0.034625
0.017298
0.001818
0.000042
0.000001
0
0
0
0
0
0
REscope: High-dimensional Statistical Circuit Simulation towards Full Failure Region Coverage Statistical circuit simulation is exhibiting increasing importance for circuit design under process variations. Existing approaches cannot efficiently analyze the failure probability for circuits with a large number of variations, nor handle problems with multiple disjoint failure regions. The proposed rare event microscope (REscope) first reduces the problem dimension by pruning the parameters with little contribution to circuit failure. Furthermore, we applied a nonlinear classifier which is capable of identifying multiple disjoint failure regions. In REscope, only likely-to-fail samples are simulated and then matched to a generalized Pareto distribution. On a 108-dimension charge pump circuit in a PLL design, REscope outperforms importance sampling and achieves more than 2 orders of magnitude speedup compared to Monte Carlo. Moreover, it accurately estimates the failure rate, while importance sampling fails entirely because the failure regions are not correctly captured.
A Fast Non-Monte-Carlo Yield Analysis and Optimization by Stochastic Orthogonal Polynomials Performance failure has become a significant threat to the reliability and robustness of analog circuits. In this article, we first develop an efficient non-Monte-Carlo (NMC) transient mismatch analysis, where transient response is represented by stochastic orthogonal polynomial (SOP) expansion under PVT variations and probabilistic distribution of transient response is solved. We further define performance yield and derive stochastic sensitivity for yield within the framework of SOP, and finally develop a gradient-based multiobjective optimization to improve yield while satisfying other performance constraints. Extensive experiments show that compared to Monte Carlo-based yield estimation, our NMC method achieves up to 700X speedup and maintains 98% accuracy. Furthermore, multiobjective optimization not only improves yield by up to 95.3% with performance constraints, it also provides better efficiency than other existing methods.
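The SOP expansion used above is, in spirit, a polynomial chaos surrogate of the response in the random parameters. The sketch below fits a small Hermite (probabilists') chaos surrogate to a made-up nonlinear response of a single Gaussian parameter by least squares and compares its mean and variance against plain Monte Carlo; it is a generic illustration of the idea, not the paper's transient mismatch analysis or yield optimization.

```python
# Generic polynomial-chaos-style sketch: fit y(xi) = sum_k c_k He_k(xi) for a
# nonlinear response of one standard Gaussian parameter, then compare the
# surrogate's moments against Monte Carlo.
import math
import numpy as np
from numpy.polynomial.hermite_e import hermevander

rng = np.random.default_rng(2)

def response(xi):
    """Made-up 'circuit response' as a nonlinear function of a N(0,1) parameter."""
    return np.exp(0.3 * xi) + 0.1 * xi ** 2

order = 4
xi_train = rng.standard_normal(200)
V = hermevander(xi_train, order)                      # columns He_0..He_4 at training points
coeffs, *_ = np.linalg.lstsq(V, response(xi_train), rcond=None)

# Moments from the surrogate, using orthogonality of He_k under N(0,1):
# E[He_k] = 0 for k >= 1 and E[He_k^2] = k!, so Var ~= sum_{k>=1} c_k^2 * k!.
factorials = np.array([math.factorial(k) for k in range(order + 1)])
mean_pce = coeffs[0]
var_pce = np.sum(coeffs[1:] ** 2 * factorials[1:])

xi_mc = rng.standard_normal(200_000)
y_mc = response(xi_mc)
print("mean: PCE %.4f  MC %.4f" % (mean_pce, y_mc.mean()))
print("var : PCE %.4f  MC %.4f" % (var_pce, y_mc.var()))
```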
Statistical design and optimization of SRAM cell for yield enhancement We have analyzed and modeled the failure probabilities of SRAM cells due to process parameter variations. A method to predict the yield of a memory chip based on the cell failure probability is proposed. The developed method is used in an early stage of the design cycle to minimize memory failure probability by statistical sizing of the SRAM cell.
Remark on algorithm 659: Implementing Sobol's quasirandom sequence generator An algorithm to generate Sobol' sequences to approximate integrals in up to 40 dimensions has been previously given by Bratley and Fox in Algorithm 659. Here, we provide more primitive polynomials and "direction numbers" so as to allow the generation of Sobol' sequences to approximate integrals in up to 1111 dimensions. The direction numbers given generate Sobol' sequences that satisfy Sobol's so-called Property A.
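Readers who want to experiment with Sobol' points without implementing primitive polynomials and direction numbers themselves can use a library generator built on this line of work; the sketch below (dimension, sample size and integrand chosen arbitrarily) estimates a simple integral with scipy's Sobol' generator versus plain pseudorandom sampling.

```python
# Sketch: quasi-Monte Carlo integration with Sobol' points vs. plain Monte Carlo.
# Integrand: f(x) = prod_j (pi/2) * sin(pi * x_j) over [0,1]^d, whose exact integral is 1.
import numpy as np
from scipy.stats import qmc

d, n = 8, 2 ** 12
rng = np.random.default_rng(3)

def f(x):
    return np.prod(0.5 * np.pi * np.sin(np.pi * x), axis=1)

sobol = qmc.Sobol(d=d, scramble=True, seed=3)
x_qmc = sobol.random(n)                  # n Sobol' points in [0,1]^d (n is a power of two)
x_mc = rng.random((n, d))                # n pseudorandom points

print("exact       : 1.0")
print("Sobol' QMC  :", f(x_qmc).mean())
print("Monte Carlo :", f(x_mc).mean())
```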
Statistical blockade: very fast statistical simulation and modeling of rare circuit events and its application to memory design Circuit reliability under random parametric variation is an area of growing concern. For highly replicated circuits, e.g., static random access memories (SRAMs), a rare statistical event for one circuit may induce a not-so-rare system failure. Existing techniques perform poorly when tasked to generate both efficient sampling and sound statistics for these rare events. Statistical blockade is a novel Monte Carlo technique that allows us to efficiently filter--to block--unwanted samples that are insufficiently rare in the tail distributions we seek. The method synthesizes ideas from data mining and extreme value theory and, for the challenging application of SRAM yield analysis, shows speedups of 10-100 times over standard Monte Carlo.
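The blockade idea can be illustrated on a synthetic example: train a cheap classifier on a small pilot Monte Carlo run to recognize candidate tail points, then run the expensive "simulation" only on the points the classifier does not block. In the sketch below the performance metric, thresholds, sample sizes and the choice of a linear SVM are all invented for illustration and are not taken from the paper.

```python
# Toy statistical-blockade-style sketch: filter Monte Carlo samples with a cheap
# classifier so that only likely-tail points are "simulated".
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(4)

def simulate(x):
    """Stand-in for an expensive circuit simulation: a scalar performance metric."""
    return x[:, 0] + 0.5 * x[:, 1] ** 2 + 0.1 * rng.standard_normal(len(x))

# 1) Small pilot Monte Carlo run to learn where the tail lives.
d, n_pilot = 4, 2000
x_pilot = rng.standard_normal((n_pilot, d))
y_pilot = simulate(x_pilot)
tail_cut = np.quantile(y_pilot, 0.97)                    # "tail" threshold t
block_cut = np.quantile(y_pilot, 0.90)                   # relaxed classifier threshold t_c < t
clf = LinearSVC(C=1.0, max_iter=20000).fit(x_pilot, y_pilot > block_cut)

# 2) Large candidate set: block points predicted to be clearly non-tail.
n_big = 200_000
x_big = rng.standard_normal((n_big, d))
keep = clf.predict(x_big).astype(bool)
y_kept = simulate(x_big[keep])                           # simulate only unblocked points

tail_rate = np.sum(y_kept > tail_cut) / n_big            # blocked points assumed non-tail
print(f"simulated {keep.sum()} of {n_big} points; estimated P(metric > t) = {tail_rate:.5f}")
```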
First-order incremental block-based statistical timing analysis Variability in digital integrated circuits makes timing verification an extremely challenging task. In this paper, a canonical first order delay model is proposed that takes into account both correlated and independent randomness. A novel linear-time block-based statistical timing algorithm is employed to propagate timing quantities like arrival times and required arrival times through the timing graph in this canonical form. At the end of the statistical timing, the sensitivities of all timing quantities to each of the sources of variation are available. Excessive sensitivities can then be targeted by manual or automatic optimization methods to improve the robustness of the design. This paper also reports the first incremental statistical timer in the literature which is suitable for use in the inner loop of physical synthesis or other optimization programs. The third novel contribution of this paper is the computation of local and global criticality probabilities. For a very small cost in CPU time, the probability of each edge or node of the timing graph being critical is computed. Numerical results are presented on industrial ASIC chips with over two million logic gates.
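To make the block-based propagation concrete, the sketch below implements the two core operations on Gaussian timing quantities, the sum and a moment-matched max (Clark's approximation), for a toy two-input gate. The canonical form with explicit per-source sensitivity coefficients is simplified away here; only means, sigmas and a pairwise correlation are kept.

```python
# Sketch of the two core SSTA operations on Gaussian timing quantities:
# sum (exact for Gaussians) and max (Clark's moment-matching approximation).
import math

def norm_pdf(x):
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def add(mu_a, sig_a, mu_d, sig_d, rho=0.0):
    """Arrival time + gate delay, jointly Gaussian with correlation rho."""
    var = sig_a ** 2 + sig_d ** 2 + 2.0 * rho * sig_a * sig_d
    return mu_a + mu_d, math.sqrt(var)

def clark_max(mu1, sig1, mu2, sig2, rho=0.0):
    """Mean/sigma of max(A, B) for correlated Gaussians (Clark's approximation)."""
    theta = math.sqrt(max(sig1 ** 2 + sig2 ** 2 - 2.0 * rho * sig1 * sig2, 1e-12))
    alpha = (mu1 - mu2) / theta
    mean = mu1 * norm_cdf(alpha) + mu2 * norm_cdf(-alpha) + theta * norm_pdf(alpha)
    second = ((mu1 ** 2 + sig1 ** 2) * norm_cdf(alpha)
              + (mu2 ** 2 + sig2 ** 2) * norm_cdf(-alpha)
              + (mu1 + mu2) * theta * norm_pdf(alpha))
    return mean, math.sqrt(max(second - mean ** 2, 0.0))

# Toy 2-input gate: input arrival times plus gate delays, then max at the output.
a1 = add(mu_a=1.00, sig_a=0.05, mu_d=0.30, sig_d=0.03)
a2 = add(mu_a=0.90, sig_a=0.08, mu_d=0.35, sig_d=0.04)
print("output arrival (mean, sigma):", clark_max(*a1, *a2, rho=0.3))
```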
Estimators and tail bounds for dimension reduction in lα (0 < α ≤ 2) using stable random projections. The method of stable random projections is popular in data stream computations, data mining, information retrieval, and machine learning, for efficiently computing the lα (0 < α ≤ 2) distances using a small (memory) space, in one pass of the data. We propose algorithms based on (1) the geometric mean estimator, for all 0 <α ≤ 2, and (2) the harmonic mean estimator, only for small α (e.g., α < 0.344). Compared with the previous classical work [27], our main contributions include: • The general sample complexity bound for α ≠ 1,2. For α = 1, [27] provided a nice argument based on the inverse of Cauchy density about the median, leading to a sample complexity bound, although they did not provide the constants and their proof restricted ε to be "small enough." For general α ≠ 1, 2, however, the task becomes much more difficult. [27] provided the "conceptual promise" that the sample complexity bound similar to that for α = 1 should exist for general α, if a "non-uniform algorithm based on t-quantile" could be implemented. Such a conceptual algorithm was only for supporting the arguments in [27], not a real implementation. We consider this is one of the main problems left open in [27]. In this study, we propose a practical algorithm based on the geometric mean estimator and derive the sample complexity bound for all 0 < α ≤ 2. • The practical and optimal algorithm for α = 0+ The l0 norm is an important case. Stable random projections can provide an approximation to the l0 norm using α → 0+. We provide an algorithm based on the harmonic mean estimator, which is simple and statistically optimal. Its tail bounds are sharper than the bounds derived based on the geometric mean. We also discover a (possibly surprising) fact: in boolean data, stable random projections using α = 0+ with the harmonic mean estimator will be about twice as accurate as (l2) normal random projections. Because high-dimensional boolean data are common, we expect this fact will be practically quite useful. • The precise theoretical analysis and practical implications We provide the precise constants in the tail bounds for both the geometric mean and harmonic mean estimators. We also provide the variances (either exact or asymptotic) for the proposed estimators. These results can assist practitioners to choose sample sizes accurately.
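A small numerical illustration of estimating an lα distance from stable random projections is given below. It is a sketch only: the projections are drawn with the Chambers-Mallows-Stuck sampler, and the bias constant of the geometric-mean estimator is calibrated by simulation instead of restating the closed-form constant derived in the paper; all sizes are arbitrary.

```python
# Sketch: estimate d_alpha = sum_i |x_i - z_i|^alpha from k stable random projections,
# using a geometric-mean estimator whose bias constant is calibrated by simulation.
import numpy as np

rng = np.random.default_rng(5)
alpha, n, k = 1.3, 5000, 100

def stable_sample(alpha, size):
    """Standard symmetric alpha-stable draws (Chambers-Mallows-Stuck)."""
    u = rng.uniform(-np.pi / 2, np.pi / 2, size)
    w = rng.exponential(1.0, size)
    return (np.sin(alpha * u) / np.cos(u) ** (1 / alpha)
            * (np.cos(u - alpha * u) / w) ** ((1 - alpha) / alpha))

# Data vectors and their true l_alpha^alpha distance.
x = rng.standard_normal(n)
z = rng.standard_normal(n)
d_true = np.sum(np.abs(x - z) ** alpha)

# Project the difference with a k x n matrix of i.i.d. stable entries:
# each y_j is distributed as d_true^{1/alpha} times a standard stable variable.
R = stable_sample(alpha, (k, n))
y = R @ (x - z)

# Geometric-mean estimator with a simulation-calibrated scale constant.
calib = stable_sample(alpha, (2000, k))
scale = np.exp(np.mean(np.log(np.abs(calib)), axis=1) * alpha).mean()
d_hat = np.exp(np.mean(np.log(np.abs(y))) * alpha) / scale

print("true  sum|x-z|^alpha:", round(d_true, 2))
print("estimated           :", round(d_hat, 2))
```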
Scale-Space and Edge Detection Using Anisotropic Diffusion A new definition of scale-space is suggested, and a class of algorithms used to realize a diffusion process is introduced. The diffusion coefficient is chosen to vary spatially in such a way as to encourage intraregion smoothing rather than interregion smoothing. It is shown that the 'no new maxima should be generated at coarse scales' property of conventional scale space is preserved. As the region boundaries in the approach remain sharp, a high-quality edge detector which successfully exploits global information is obtained. Experimental results are shown on a number of images. Parallel hardware implementations are made feasible because the algorithm involves elementary, local operations replicated over the image.
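The diffusion process described above is easy to prototype; the sketch below runs a few iterations of the classic Perona-Malik update with an exponential conduction function on a noisy synthetic step image (the iteration count, contrast parameter K and step size are arbitrary choices).

```python
# Minimal Perona-Malik anisotropic diffusion sketch: the conduction coefficient
# g(|grad I|) = exp(-(|grad I|/K)^2) is small across strong edges, so smoothing
# happens within regions while edges stay sharp.
import numpy as np

def anisotropic_diffusion(img, n_iter=50, K=0.2, lam=0.2):
    img = img.astype(float).copy()
    g = lambda d: np.exp(-(d / K) ** 2)
    for _ in range(n_iter):
        # Nearest-neighbour differences (north, south, west, east), zero-flux borders.
        dN = np.roll(img, 1, axis=0) - img; dN[0, :] = 0
        dS = np.roll(img, -1, axis=0) - img; dS[-1, :] = 0
        dW = np.roll(img, 1, axis=1) - img; dW[:, 0] = 0
        dE = np.roll(img, -1, axis=1) - img; dE[:, -1] = 0
        img += lam * (g(dN) * dN + g(dS) * dS + g(dW) * dW + g(dE) * dE)
    return img

# Synthetic test: a noisy step edge is smoothed inside each region but not across it.
rng = np.random.default_rng(6)
img = np.zeros((64, 64)); img[:, 32:] = 1.0
noisy = img + 0.1 * rng.standard_normal(img.shape)
smoothed = anisotropic_diffusion(noisy)
print(f"noise std inside left region: {noisy[:, :16].std():.3f} -> {smoothed[:, :16].std():.3f}")
print(f"edge contrast (cols 31 vs 33): {smoothed[:, 33].mean() - smoothed[:, 31].mean():.3f}")
```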
On the quasi-Monte Carlo method with Halton points for elliptic PDEs with log-normal diffusion. This article is dedicated to the computation of the moments of the solution to elliptic partial differential equations with random, log-normally distributed diffusion coefficients by the quasi-Monte Carlo method. Our main result is that the convergence rate of the quasi-Monte Carlo method based on the Halton sequence for the moment computation depends only linearly on the dimensionality of the stochastic input parameters. In particular, we attain this rather mild dependence on the stochastic dimensionality without any randomization of the quasi-Monte Carlo method under consideration. For the proof of the main result, we require related regularity estimates for the solution and its powers. These estimates are also provided here. Numerical experiments are given to validate the theoretical findings.
Recognition of shapes by attributed skeletal graphs In this paper, we propose a framework to address the problem of generic 2-D shape recognition. The aim is mainly on using the potential strength of skeleton of discrete objects in computer vision and pattern recognition where features of objects are needed for classification. We propose to represent the medial axis characteristic points as an attributed skeletal graph to model the shape. The information about the object shape and its topology is totally embedded in them and this allows the comparison of different objects by graph matching algorithms. The experimental results demonstrate the correctness in detecting its characteristic points and in computing a more regular and effective representation for a perceptual indexing. The matching process, based on a revised graduated assignment algorithm, has produced encouraging results, showing the potential of the developed method in a variety of computer vision and pattern recognition domains. The results demonstrate its robustness in the presence of scale, reflection and rotation transformations and prove the ability to handle noise and occlusions.
Compressive sampling for streaming signals with sparse frequency content Compressive sampling (CS) has emerged as a significant signal processing framework to acquire and reconstruct sparse signals at rates significantly below the Nyquist rate. However, most of the CS development to date has focused on finite-length signals and representations. In this paper we discuss a streaming CS framework and greedy reconstruction algorithm, the Streaming Greedy Pursuit (SGP), to reconstruct signals with sparse frequency content. Our proposed sampling framework and the SGP are explicitly intended for streaming applications and signals of unknown length. The measurement framework we propose is designed to be causal and implementable using existing hardware architectures. Furthermore, our reconstruction algorithm provides specific computational guarantees, which makes it appropriate for real-time system implementations. Our experiment results on very long signals demonstrate the good performance of the SGP and validate our approach.
QoE Aware Service Delivery in Distributed Environment Service delivery and customer satisfaction are strongly related items for a correct commercial management platform. Technical aspects targeting this issue relate to QoS parameters that can be handled by the platform, at least partially. Subjective psychological issues and human cognitive aspects are typically not considered, yet they directly determine the Quality of Experience (QoE). These factors finally have to be considered as key input for a successful business operation between a customer and a company. In our work, a multi-disciplinary approach is taken to propose a QoE interaction model based on theoretical results from various fields including psychology, cognitive sciences, sociology, service ecosystems and information technology. In this paper a QoE evaluator is described for assessing the service delivery in a distributed and integrated environment on a per-user and per-service basis.
Fuzzy modeling of system behavior for risk and reliability analysis The main objective of the article is to permit reliability analysts/engineers/managers/practitioners to analyze the failure behavior of a system in a more consistent and logical manner. To this effect, the authors propose a methodological and structured framework, which makes use of both qualitative and quantitative techniques for risk and reliability analysis of the system. The framework has been applied to model and analyze a complex industrial system from a paper mill. In the quantitative framework, after developing the Petri net model of the system, the fuzzy synthesis of failure and repair data (using fuzzy arithmetic operations) has been done. Various system parameters of managerial importance such as repair time, failure rate, mean time between failures, availability, and expected number of failures are computed to quantify the behavior in terms of fuzzy, crisp and defuzzified values. Further, to improve upon the reliability and maintainability characteristics of the system, an in-depth qualitative analysis of the system is carried out using failure mode and effect analysis (FMEA) by listing out all possible failure modes, their causes and their effects on system performance. To address the limitations of the traditional FMEA method based on the risk priority number (RPN) score, a risk ranking approach based on fuzzy and Grey relational analysis is proposed to prioritize failure causes.
Subjective Quality Metric For 3d Video Services Three-dimensional (3D) video service is expected to be introduced as a next-generation television service. Stereoscopic video is composed of two 2D video signals for the left and right views, and these 2D video signals are encoded. Video quality between the left and right views is not always consistent because, for example, each view is encoded at a different bit rate. As a result, the video quality difference between the left and right views degrades the quality of stereoscopic video. However, these characteristics have not been thoroughly studied or modeled. Therefore, it is necessary to better understand how the video quality difference affects stereoscopic video quality and to model the video quality characteristics. To do that, we conducted subjective quality assessments to derive subjective video quality characteristics. The characteristics showed that 3D video quality was affected by the difference in video quality between the left and right views, and that when the difference was small, 3D video quality correlated with the highest 2D video quality of the two views. We modeled these characteristics as a subjective quality metric using a training data set. Finally, we verified the performance of our proposed model by applying it to unknown data sets.
1.04
0.04
0.02
0.007273
0.002597
0
0
0
0
0
0
0
0
0
Trapezoidal interval type-2 fuzzy aggregation operators and their application to multiple attribute group decision making. A type-2 fuzzy set, which is characterized by a fuzzy membership function, involves more uncertainties than the type-1 fuzzy set. As the most widely used type-2 fuzzy set, interval type-2 fuzzy set is a very useful tool to model the uncertainty in the process of decision making. As a special case of interval type-2 fuzzy set, trapezoidal interval type-2 fuzzy set can express linguistic assessments by transforming them into numerical variables objectively. The aim of this paper is to investigate the multiple attribute group decision-making problems in which the attribute values and the weights take the form of trapezoidal interval type-2 fuzzy sets. First, we introduce the concept of trapezoidal interval type-2 fuzzy sets and some arithmetic operations between them. Then, we develop several trapezoidal interval type-2 fuzzy aggregation operators for aggregating trapezoidal interval type-2 fuzzy sets and examine several useful properties of the developed operators. Furthermore, based on the proposed operators, we develop two approaches to multiple attribute group decision making with linguistic information. Finally, a practical example is given to illustrate the feasibility and effectiveness of the developed approach.
Optimization of interval type-2 fuzzy systems for image edge detection. •The optimization of the antecedent parameters for a type 2 fuzzy system of edge detection is presented.•The goal of interval type-2 fuzzy logic in edge detection methods is to provide the ability to handle uncertainty.•Results show that the Cuckoo search provides better results in optimizing the type-2 fuzzy system.
Mean and CV reduction methods on Gaussian type-2 fuzzy set and its application to a multilevel profit transportation problem in a two-stage supply chain network. The transportation problem (TP) is an important supply chain optimization problem in the traffic engineering. This paper maximizes the total profit over a three-tiered distribution system consisting of plants, distribution centers (DCs) and customers. Plants produce multiple products that are shipped to DCs. If a DC is used, then a fixed cost (FC) is charged. The customers are supplied by a single DC. To characterize the uncertainty in the practical decision environment, this paper considers the unit cost of TP, FC, the supply capacities and demands as Gaussian type-2 fuzzy variables. To give a modeling framework for optimization problems with multifold uncertainty, different reduction methods were proposed to transform a Gaussian type-2 fuzzy variable into a type-1 fuzzy variable by mean reduction method and CV reduction method. Then, the TP was reformulated as a chance-constrained programming model enlightened by the credibility optimization methods. The deterministic models are then solved using two different soft computing techniques—generalized reduced gradient and modified particle swarm optimization, where the position of each particle is adjusted according to its own experience and that of its neighbors. The numerical experiments illustrated the application and effectiveness of the proposed approaches.
A new multi-criteria weighting and ranking model for group decision-making analysis based on interval-valued hesitant fuzzy sets to selection problems The multi-criteria group decision-making methods under fuzzy environments are developed to cope with imprecise and uncertain information for solving the complex group decision-making problems. A team of some professional experts for the assessment is established to judge candidates or alternatives among the chosen evaluation criteria. In this paper, a novel multi-criteria weighting and ranking model is introduced with interval-valued hesitant fuzzy setting, namely IVHF-MCWR, based on the group decision analysis. The interval-valued hesitant fuzzy set theory is a powerful tool to deal with uncertainty by considering some interval-values for an alternative under a set regarding assessment factors. In procedure of the proposed IVHF-MCWR model, weights of criteria as well as experts are considered to decrease the errors. In this regard, optimal criteria' weights are computed by utilizing an extended maximizing deviation method based on IVHF-Hamming distance measure. In addition, experts' judgments are taken into account for computing the criteria' weights. Also, experts' weights are determined based on proposed new IVHF technique for order performance by similarity to ideal solution method. Then, a new IVHF-index based on Hamming distance measure is introduced to compute the relative closeness coefficient for ranking the candidates or alternatives. Finally, two application examples about the location and supplier selection problems are considered to indicate the capability of the proposed IVHF-MCWR model. In addition, comparative analysis is reported to compare the proposed model and three fuzzy decision methods from the recent literature. Comparing these approaches and computational results shows that the IVHF-MCWR model works properly under uncertain conditions.
Multi-criteria decision making method based on possibility degree of interval type-2 fuzzy number This paper proposes a new approach based on possibility degree to solve multi-criteria decision making (MCDM) problems in which the criteria value takes the form of interval type-2 fuzzy number. First, a new expected value function is defined and an optimal model based on maximizing deviation method is constructed to obtain weight coefficients when criteria weight information is partially known. Then, the overall value of each alternative is calculated by the defined aggregation operators. Furthermore, a new possibility degree, which is proposed to overcome some drawbacks of the existing methods, is introduced for comparisons between the overall values of alternatives to construct a possibility degree matrix. Based on the constructed matrix, all of the alternatives are ranked according to the ranking vector derived from the matrix, and the best one is selected. Finally, the proposed method is applied to a case study on the overseas minerals investment for one of the largest multi-species nonferrous metals companies in China and the results demonstrate the feasibility of the method.
First-order incremental block-based statistical timing analysis Variability in digital integrated circuits makes timing verification an extremely challenging task. In this paper, a canonical first order delay model is proposed that takes into account both correlated and independent randomness. A novel linear-time block-based statistical timing algorithm is employed to propagate timing quantities like arrival times and required arrival times through the timing graph in this canonical form. At the end of the statistical timing, the sensitivities of all timing quantities to each of the sources of variation are available. Excessive sensitivities can then be targeted by manual or automatic optimization methods to improve the robustness of the design. This paper also reports the first incremental statistical timer in the literature which is suitable for use in the inner loop of physical synthesis or other optimization programs. The third novel contribution of this paper is the computation of local and global criticality probabilities. For a very small cost in CPU time, the probability of each edge or node of the timing graph being critical is computed. Numerical results are presented on industrial ASIC chips with over two million logic gates.
Some Defects in Finite-Difference Edge Finders This work illustrates and explains various artifacts in the output of five finite difference edge finders, those of J.F. Canny (1983, 1986), R.A. Boie et al. (1986) and R.A. Boie and I.J. Cox (1987), and three variations on that of D. Marr and E.C. Hildreth (1980), reimplemented with a common output format and method of noise suppression. These artifacts include gaps in boundaries, spurious boundaries, and deformation of region shape.
A Tutorial on Support Vector Machines for Pattern Recognition The tutorial starts with an overview of the concepts of VC dimension and structural risk minimization. We then describe linear Support Vector Machines (SVMs) for separable and non-separable data, working through a non-trivial example in detail. We describe a mechanical analogy, and discuss when SVM solutions are unique and when they are global. We describe how support vector training can be practically implemented, and discuss in detail the kernel mapping technique which is used to construct SVM solutions which are nonlinear in the data. We show how Support Vector machines can have very large (even infinite) VC dimension by computing the VC dimension for homogeneous polynomial and Gaussian radial basis function kernels. While very high VC dimension would normally bode ill for generalization performance, and while at present there exists no theory which shows that good generalization performance is guaranteed for SVMs, there are several arguments which support the observed high accuracy of SVMs, which we review. Results of some experiments which were inspired by these arguments are also presented. We give numerous examples and proofs of most of the key theorems. There is new material, and I hope that the reader will find that even old material is cast in a fresh light.
Reconstruction of a low-rank matrix in the presence of Gaussian noise. This paper addresses the problem of reconstructing a low-rank signal matrix observed with additive Gaussian noise. We first establish that, under mild assumptions, one can restrict attention to orthogonally equivariant reconstruction methods, which act only on the singular values of the observed matrix and do not affect its singular vectors. Using recent results in random matrix theory, we then propose a new reconstruction method that aims to reverse the effect of the noise on the singular value decomposition of the signal matrix. In conjunction with the proposed reconstruction method we also introduce a Kolmogorov–Smirnov based estimator of the noise variance.
Proceedings of the 41st Design Automation Conference, DAC 2004, San Diego, CA, USA, June 7-11, 2004
Compressive sampling for streaming signals with sparse frequency content Compressive sampling (CS) has emerged as a significant signal processing framework to acquire and reconstruct sparse signals at rates significantly below the Nyquist rate. However, most of the CS development to date has focused on finite-length signals and representations. In this paper we discuss a streaming CS framework and greedy reconstruction algorithm, the Streaming Greedy Pursuit (SGP), to reconstruct signals with sparse frequency content. Our proposed sampling framework and the SGP are explicitly intended for streaming applications and signals of unknown length. The measurement framework we propose is designed to be causal and implementable using existing hardware architectures. Furthermore, our reconstruction algorithm provides specific computational guarantees, which makes it appropriate for real-time system implementations. Our experiment results on very long signals demonstrate the good performance of the SGP and validate our approach.
User impatience and network performance In this work, we analyze from passive measurements the correlations between the user-induced interruptions of TCP connections and different end-to-end performance metrics. The aim of this study is to assess the possibility for a network operator to take into account the customers' experience for network monitoring. We first observe that the usual connection-level performance metrics of the interrupted connections are not very different, and sometimes better than those of normal connections. However, the request-level performance metrics show stronger correlations between the interruption rates and the network quality-of-service. Furthermore, we show that the user impatience could also be used to characterize the relative sensitivity of data applications to various network performance metrics.
Fuzzy Power Command Enhancement in Mobile Communications Systems
On Fuzziness, Its Homeland and Its Neighbour
1.2
0.2
0.2
0.1
0.013333
0
0
0
0
0
0
0
0
0
Scaling multiple-source entity resolution using statistically efficient transfer learning We consider a serious, previously-unexplored challenge facing almost all approaches to scaling up entity resolution (ER) to multiple data sources: the prohibitive cost of labeling training data for supervised learning of similarity scores for each pair of sources. While there exists a rich literature describing almost all aspects of pairwise ER, this new challenge is arising now due to the unprecedented ability to acquire and store data from online sources, interest in features driven by ER such as enriched search verticals, and the uniqueness of noisy and missing data characteristics for each source. We show on real-world and synthetic data that for state-of-the-art techniques, the reality of heterogeneous sources means that the number of labeled training data must scale quadratically in the number of sources, just to maintain constant precision/recall. We address this challenge with a brand new transfer learning algorithm which requires far less training data (or equivalently, achieves superior accuracy with the same data) and is trained using fast convex optimization. The intuition behind our approach is to adaptively share structure learned about one scoring problem with all other scoring problems sharing a data source in common. We demonstrate that our theoretically-motivated approach improves upon existing techniques for multi-source ER.
Decoding by linear programming This paper considers a natural error correcting problem with real valued input/output. We wish to recover an input vector f ∈ ℝ^n from corrupted measurements y = Af + e. Here, A is an m by n (coding) matrix and e is an arbitrary and unknown vector of errors. Is it possible to recover f exactly from the data y? We prove that under suitable conditions on the coding matrix A, the input f is the unique solution to the ℓ1-minimization problem min_{g ∈ ℝ^n} ||y − Ag||_{ℓ1}, where ||x||_{ℓ1} := Σ_i |x_i|, provided that the support of the vector of errors is not too large: ||e||_{ℓ0} := |{i : e_i ≠ 0}| ≤ ρ·m for some ρ > 0. In short, f can be recovered exactly by solving a simple convex optimization problem (which one can recast as a linear program). In addition, numerical experiments suggest that this recovery procedure works unreasonably well; f is recovered exactly even in situations where a significant fraction of the output is corrupted. This work is related to the problem of finding sparse solutions to vastly underdetermined systems of linear equations. There are also significant connections with the problem of recovering signals from highly incomplete measurements. In fact, the results introduced in this paper improve on our earlier work. Finally, underlying the success of ℓ1 is a crucial property we call the uniform uncertainty principle that we shall describe in detail.
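The recovery problem above can be recast as a linear program in a few lines; the sketch below solves min_g ||y − Ag||_{ℓ1} with scipy.optimize.linprog on a small random instance with sparse gross corruptions (matrix size, corruption level and magnitudes are arbitrary choices for illustration).

```python
# Sketch: decode f from corrupted measurements y = A f + e by l1 minimization,
# min_g ||y - A g||_1, recast as the LP: min sum(t) s.t. -t <= y - A g <= t.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(7)
n, m, n_errors = 30, 90, 15                      # signal dim, measurements, corrupted entries

A = rng.standard_normal((m, n)) / np.sqrt(m)
f = rng.standard_normal(n)
e = np.zeros(m)
e[rng.choice(m, n_errors, replace=False)] = rng.standard_normal(n_errors) * 5.0
y = A @ f + e

# Variables: z = [g (n entries), t (m entries)]. Minimize sum(t) subject to
#   A g - t <= y   and   -A g - t <= -y   (i.e. |y - A g| <= t elementwise).
c = np.concatenate([np.zeros(n), np.ones(m)])
A_ub = np.block([[A, -np.eye(m)], [-A, -np.eye(m)]])
b_ub = np.concatenate([y, -y])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * (n + m), method="highs")

g_hat = res.x[:n]
print("recovery error ||g_hat - f||_inf:", np.abs(g_hat - f).max())
```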
Compressed Sensing. Suppose x is an unknown vector in ℝ^m (a digital image or signal); we plan to measure n general linear functionals of x and then reconstruct. If x is known to be compressible by transform coding with a known transform, and we reconstruct via the nonlinear procedure defined here, the number of measurements n can be dramatically smaller than the size m. Thus, certain natural classes of images with m pixels need only n = O(m^{1/4} log^{5/2}(m)) nonadaptive nonpixel samples for faithful recovery, as opposed to the usual m pixel samples. More specifically, suppose x has a sparse representation in some orthonormal basis (e.g., wavelet, Fourier) or tight frame (e.g., curvelet, Gabor), so the coefficients belong to an ℓ_p ball for 0 < p ≤ 1. The N most important coefficients in that expansion allow reconstruction with ℓ_2 error O(N^{1/2−1/p}). It is possible to design n = O(N log(m)) nonadaptive measurements allowing reconstruction with accuracy comparable to that attainable with direct knowledge of the N most important coefficients. Moreover, a good approximation to those N important coefficients is extracted from the n measurements by solving a linear program (Basis Pursuit in signal processing). The nonadaptive measurements have the character of "random" linear combinations of basis/frame elements. Our results use the notions of optimal recovery, of n-widths, and information-based complexity. We estimate the Gel'fand n-widths of ℓ_p balls in high-dimensional Euclidean space in the case 0 < p ≤ 1, and give a criterion identifying near-optimal subspaces for Gel'fand n-widths. We show that "most" subspaces are near-optimal, and show that convex optimization (Basis Pursuit) is a near-optimal way to extract information derived from these near-optimal subspaces.
On the power of adaption Optimal error bounds for adaptive and nonadaptive numerical methods are compared. Since the class of adaptive methods is much larger, a well-chosen adaptive method might seem to be better than any nonadaptive method. Nevertheless there are several results saying that under natural assumptions adaptive methods are not better than nonadaptive ones. There are also other results, however, saying that adaptive methods can be significantly better than nonadaptive ones as well as bounds on how much better they can be. It turns out that the answer to the “adaption problem” depends very much on what is known a priori about the problem in question; even a seemingly small change of the assumptions can lead to a different answer.
Uncertainty principles and ideal atomic decomposition Suppose a discrete-time signal S(t), 0 ≤ t < N, is a superposition of atoms taken from a combined time-frequency dictionary made of spike sequences 1{t=τ} and sinusoids exp{2πiwt/N}/√N. Can one recover, from knowledge of S alone, the precise collection of atoms going to make up S? Because every discrete-time signal can be represented as a superposition of spikes alone, or as a superposition of sinusoids alone, there is no unique way of writing S as a sum of spikes and sinusoids in general. We prove that if S is representable as a highly sparse superposition of atoms from this time-frequency dictionary, then there is only one such highly sparse representation of S, and it can be obtained by solving the convex optimization problem of minimizing the l1 norm of the coefficients among all decompositions. Here "highly sparse" means that N_t + N_w < √N/2 where N_t is the number of time atoms, N_w is the number of frequency atoms, and N is the length of the discrete-time signal. Underlying this result is a general l1 uncertainty principle which says that if two bases are mutually incoherent, no nonzero signal can have a sparse representation in both bases simultaneously. For the above setting, the bases are sinusoids and spikes, and mutual incoherence is measured in terms of the largest inner product between different basis elements. The uncertainty principle holds for a variety of interesting basis pairs, not just sinusoids and spikes. The results have idealized applications to band-limited approximation with gross errors, to error-correcting encryption, and to separation of uncoordinated sources. Related phenomena hold for functions of a real variable, with basis pairs such as sinusoids and wavelets, and for functions of two variables, with basis pairs such as wavelets and ridgelets. In these settings, if a function f is representable by a sufficiently sparse superposition of terms taken from both bases, then there is only one such sparse representation; it may be obtained by minimum l1 norm atomic decomposition. The condition "sufficiently sparse" becomes a multiscale condition; for example, that the number of wavelets at level j plus the number of sinusoids in the jth dyadic frequency band are together less than a constant times 2^{j/2}.
Explicit cost bounds of algorithms for multivariate tensor product problems We study multivariate tensor product problems in the worst case and average case settings. They are defined on functions of d variables. For arbitrary d, we provide explicit upper bounds on the costs of algorithms which compute an ε-approximation to the solution. The cost bounds are of the form (c(d) + 2) β_1 (β_2 + β_3 ln(1/ε)/(d − 1))^{β_4 (d−1)} (1/ε)^{β_5}. Here c(d) is the cost of one function evaluation (or one linear functional evaluation), and the β_i's do not...
The variety generated by the truth value algebra of type-2 fuzzy sets This paper addresses some questions about the variety generated by the algebra of truth values of type-2 fuzzy sets. Its principal result is that this variety is generated by a finite algebra, and in particular is locally finite. This provides an algorithm for determining when an equation holds in this variety. It also sheds light on the question of determining an equational axiomatization of this variety, although this problem remains open.
MIMO technologies in 3GPP LTE and LTE-advanced 3rd Generation Partnership Project (3GPP) has recently completed the specification of the Long Term Evolution (LTE) standard. Majority of the world's operators and vendors are already committed to LTE deployments and developments, making LTE the market leader in the upcoming evolution to 4G wireless communication systems. Multiple input multiple output (MIMO) technologies introduced in LTE such as spatial multiplexing, transmit diversity, and beamforming are key components for providing higher peak rate at a better system efficiency, which are essential for supporting future broadband data service over wireless links. Further extension of LTE MIMO technologies is being studied under the 3GPP study item "LTE-Advanced" to meet the requirement of IMT-Advanced set by International Telecommunication Union Radiocommunication Sector (ITU-R). In this paper, we introduce various MIMO technologies employed in LTE and provide a brief overview on the MIMO technologies currently discussed in the LTE-Advanced forum.
Galerkin Finite Element Approximations of Stochastic Elliptic Partial Differential Equations We describe and analyze two numerical methods for a linear elliptic problem with stochastic coefficients and homogeneous Dirichlet boundary conditions. Here the aim of the computations is to approximate statistical moments of the solution, and, in particular, we give a priori error estimates for the computation of the expected value of the solution. The first method generates independent identically distributed approximations of the solution by sampling the coefficients of the equation and using a standard Galerkin finite element variational formulation. The Monte Carlo method then uses these approximations to compute corresponding sample averages. The second method is based on a finite dimensional approximation of the stochastic coefficients, turning the original stochastic problem into a deterministic parametric elliptic problem. A Galerkin finite element method, of either the h- or p-version, then approximates the corresponding deterministic solution, yielding approximations of the desired statistics. We present a priori error estimates and include a comparison of the computational work required by each numerical approximation to achieve a given accuracy. This comparison suggests intuitive conditions for an optimal selection of the numerical approximation.
Aggregation Using the Linguistic Weighted Average and Interval Type-2 Fuzzy Sets The focus of this paper is the linguistic weighted average (LWA), where the weights are always words modeled as interval type-2 fuzzy sets (IT2 FSs), and the attributes may also (but do not have to) be words modeled as IT2 FSs; consequently, the output of the LWA is an IT2 FS. The LWA can be viewed as a generalization of the fuzzy weighted average (FWA) where the type-1 fuzzy inputs are replaced by IT2 FSs. This paper presents the theory, algorithms, and an application of the LWA. It is shown that finding the LWA can be decomposed into finding two FWAs. Since the LWA can model more uncertainties, it should have wide applications in distributed and hierarchical decision-making.
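Because the LWA reduces to fuzzy weighted averages computed α-cut by α-cut, a useful building block is the interval weighted average: the extremes of Σwᵢxᵢ/Σwᵢ over interval-valued xᵢ and wᵢ are attained with each weight at one of its endpoints. The sketch below finds them by brute-force enumeration for small n, as a stand-in for the Karnik-Mendel procedures normally used; the example intervals are invented.

```python
# Brute-force interval weighted average: lower/upper bounds of
# y = sum(w_i * x_i) / sum(w_i) with x_i in [a_i, b_i] and w_i in [c_i, d_i] > 0.
# The extremes occur with each weight at an endpoint, so enumerate all 2^n choices
# (small n only; Karnik-Mendel type algorithms do this efficiently in practice).
from itertools import product

def interval_weighted_average(x_intervals, w_intervals):
    lo, hi = float("inf"), float("-inf")
    for endpoints in product(*w_intervals):          # every vertex of the weight box
        s = sum(endpoints)
        lo = min(lo, sum(w * a for w, (a, _) in zip(endpoints, x_intervals)) / s)
        hi = max(hi, sum(w * b for w, (_, b) in zip(endpoints, x_intervals)) / s)
    return lo, hi

# Example: three attribute scores and weights, each given as an interval
# (think of one alpha-cut of the corresponding fuzzy set).
x = [(6.0, 8.0), (3.0, 5.0), (7.0, 9.0)]
w = [(0.6, 1.0), (0.2, 0.5), (0.7, 0.9)]
print("FWA alpha-cut:", interval_weighted_average(x, w))
```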
Stability and Instance Optimality for Gaussian Measurements in Compressed Sensing In compressed sensing, we seek to gain information about a vector x ∈ ℝ^N from d ≪ N nonadaptive linear measurements. Candes, Donoho, Tao et al. (see, e.g., Candes, Proc. Intl. Congress Math., Madrid, 2006; Candes et al., Commun. Pure Appl. Math. 59:1207–1223, 2006; Donoho, IEEE Trans. Inf. Theory 52:1289–1306, 2006) proposed to seek a good approximation to x via ℓ1 minimization. In this paper, we show that in the case of Gaussian measurements, ℓ1 minimization recovers the signal well from inaccurate measurements, thus improving the result from Candes et al. (Commun. Pure Appl. Math. 59:1207–1223, 2006). We also show that this numerically friendly algorithm (see Candes et al., Commun. Pure Appl. Math. 59:1207–1223, 2006) with overwhelming probability recovers the signal with accuracy comparable to the accuracy of the best k-term approximation in the Euclidean norm when k ∼ d/ln N.
On Generalized Induced Linguistic Aggregation Operators In this paper, we define various generalized induced linguistic aggregation operators, including the generalized induced linguistic ordered weighted averaging (GILOWA) operator, generalized induced linguistic ordered weighted geometric (GILOWG) operator, generalized induced uncertain linguistic ordered weighted averaging (GIULOWA) operator, generalized induced uncertain linguistic ordered weighted geometric (GIULOWG) operator, etc. Each object processed by these operators consists of three components, where the first component represents the importance degree or character of the second component, and the second component is used to induce an ordering, through the first component, over the third components, which are linguistic variables (or uncertain linguistic variables) and are then aggregated. It is shown that the induced linguistic ordered weighted averaging (ILOWA) operator and linguistic ordered weighted averaging (LOWA) operator are special cases of the GILOWA operator, the induced linguistic ordered weighted geometric (ILOWG) operator and linguistic ordered weighted geometric (LOWG) operator are special cases of the GILOWG operator, the induced uncertain linguistic ordered weighted averaging (IULOWA) operator and uncertain linguistic ordered weighted averaging (ULOWA) operator are special cases of the GIULOWA operator, and the induced uncertain linguistic ordered weighted geometric (IULOWG) operator and uncertain LOWG operator are special cases of the GIULOWG operator.
Application of FMCDM model to selecting the hub location in the marine transportation: A case study in southeastern Asia Hub location selection problems have become one of the most popular and important issues not only in the truck transportation and the air transportation, but also in the marine transportation. The main focus of this paper is on container transshipment hub locations in southeastern Asia. Transshipment is the fastest growing segment of the containerport market, resulting in significant scope to develop new transshipment terminal capacity to cater for future expected traffic flows. A shipping carrier not only calculates transport distances and operation costs, but also evaluates some qualitative conditions for existing hub locations and then selects an optimal container transshipment hub location in the region. In this paper, a fuzzy multiple criteria decision-making (FMCDM) model is proposed for evaluating and selecting the container transshipment hub port. Finally, the utilization of the proposed FMCDM model is demonstrated with a case study of hub locations in southeastern Asia. The results show that the FMCDM model proposed in this paper can be used to explain the evaluation and decision-making procedures of hub location selection well. In addition, the preferences are calculated for existing hub locations and these are then compared with a new proposed container transshipment hub location in the region, in this instance the Port of Shanghai. Furthermore, a sensitivity analysis is performed.
Subjective Quality Metric For 3d Video Services Three-dimensional (3D) video service is expected to be introduced as a next-generation television service. Stereoscopic video is composed of two 2D video signals for the left and right views, and these 2D video signals are encoded. Video quality between the left and right views is not always consistent because, for example, each view is encoded at a different bit rate. As a result, the video quality difference between the left and right views degrades the quality of stereoscopic video. However, these characteristics have not been thoroughly studied or modeled. Therefore, it is necessary to better understand how the video quality difference affects stereoscopic video quality and to model the video quality characteristics. To do that, we conducted subjective quality assessments to derive subjective video quality characteristics. The characteristics showed that 3D video quality was affected by the difference in video quality between the left and right views, and that when the difference was small, 3D video quality correlated with the highest 2D video quality of the two views. We modeled these characteristics as a subjective quality metric using a training data set. Finally, we verified the performance of our proposed model by applying it to unknown data sets.
1.249984
0.001025
0.000196
0.000013
0.000003
0
0
0
0
0
0
0
0
0
On assuring end-to-end QoE in next generation networks: challenges and a possible solution In next generation networks, voice, data, and multimedia services will be converged onto a single network platform with increasing complexity and heterogeneity of underlying wireless and optical networking systems. These services should be delivered in the most cost- and resource-efficient manner with ensured user satisfaction. To this end, service providers are now switching the focus from network Quality of Service (QoS) to user Quality of Experience (QoE), which describes the overall performance of a network from the user perspective. High network QoS can, in many cases, result in high QoE, but it cannot assure high QoE. Optimizing end-to-end QoE must consider other contributing factors of QoE such as the application-level QoS, the capability of terminal equipment and customer premises networks, and subjective user factors. This article discusses challenges and a possible solution for optimizing end-to-end QoE in Next Generation Networks.
The Need for QoE-driven Interference Management in Femtocell-Overlaid Cellular Networks. Under the current requirements for mobile, ubiquitous and highly reliable communications, internet and mobile communication technologies have converged to an all-Internet Protocol (IP) packet network. This technological evolution is followed by a major change in the cellular networks' architecture, where the traditional wide-range cells (macrocells) coexist with indoor small-sized cells (femtocells). A key challenge for the evolved heterogeneous cellular networks is the mitigation of the generated interferences. In the literature, this problem has been thoroughly studied from the Quality of Service (QoS) point of view, while a study from the user's satisfaction perspective, described under the term "Quality of Experience (QoE)", has not received enough attention yet. In this paper, we study the QoE performance of VoIP calls in a femto-overlaid Long Term Evolution - Advanced (LTE-A) network and we examine how QoE can drive a power controlled interference management scheme.
Evaluation of challenges in human subject studies "in-the-wild" using subjects' personal smartphones The experimental setting of Human Mobile Computer Interaction (HCI) studies is moving from the controlled laboratory to the user's daily-life environments, while employing the users' own smartphones. These studies are challenging for both new and expert researchers in human subject studies in the HCI field. Within the last three years, we conducted three different smartphone-based user studies. From these studies, we have derived key challenges that we successfully overcame during their execution. In this paper, we present the outcomes and explain the adopted solutions for the challenges identified in the design, development and execution, and data analysis phases during the user studies. Our goal is to give newcomers and junior researchers a practical view on our conducted studies, and help practitioners to reflect on their own studies and possibly apply the proposed solutions.
A cloud radio access network with power over fiber toward 5G networks: QoE-guaranteed design and operation While the concept of Cloud Radio Access Networks (C-RANs) presents a promising solution to provide the required quality of service (QoS) for the future network environment, i.e., more than 10 Gbps capacity, less than 1 ms latency, and connectivity for numerous devices, it is still susceptible to quality of experience (QoE) problems. Until now, only a few researchers have considered the design and operation of C-RANs based on QoE. In this article we describe our envisioned C-RAN based on passive optical networks (PONs) exploiting power over fiber (PoF), which can be installed at low cost and can provide communication services without an external power supply for the remote radio head (RRH), and we describe the QoE requirements of the envisioned network. For all users in the envisioned network to satisfy their QoE, effective network design and operation approaches are then presented. Our proposed design and operation approaches demonstrate how to construct the envisioned network, i.e., the numbers of RRHs and optical line terminals (OLTs), and how to schedule RRH sleep cycles for energy-efficient optical power transmission.
Monitoring IPTV quality of experience in overlay networks using utility functions Service Overlay Networks (SONs) allow new, complex services to be created without the need to alter the underlying physical infrastructure. They represent a strong candidate for delivering IPTV services over the Internet's best effort service. The dynamic nature of overlays in terms of freedom of nodes to join and leave the network, and frequent changes in network conditions make their management a tedious, time-consuming task. This paper proposes a manager election and selection scheme to elect nodes that will monitor and analyze IPTV Quality of Experience (QoE) delivered over overlay networks. Then, a simple, yet effective, utility function is proposed to analyze the monitored data in order to infer IPTV QoE based on Quality of Service (QoS) parameters. We assess the performance of the proposed schemes using simulations.
Video quality estimator for wireless mesh networks As Wireless Mesh Networks (WMNs) have been increasingly deployed, where users can share, create and access videos with different characteristics, the need for new quality estimator mechanisms has become important because operators want to control the quality of video delivery and optimize their network resources, while increasing the user satisfaction. However, the development of in-service Quality of Experience (QoE) estimation schemes for Internet videos (e.g., real-time streaming and gaming) with different complexities, motions, Group of Picture (GoP) sizes and contents remains a significant challenge and is crucial for the success of wireless multimedia systems. To address this challenge, we propose a real-time quality estimator approach, HyQoE, for real-time multimedia applications. The performance evaluation in a WMN scenario demonstrates the high accuracy of HyQoE in estimating the Mean Opinion Score (MOS). Moreover, the results highlight the lack of performance of the well-known objective methods and the Pseudo-Subjective Quality Assessment (PSQA) approach.
QoX: What is it really? The article puts in order notions related to Quality of Service that are found in documents on service requirements. Apart from presenting a detailed description of QoS itself, it overviews classes of service (CoS) proposed by main standardization bodies and maps them across various transmission technologies. Standards and concepts related to less commonly used, though not less important, terms su...
Video-QoE aware radio resource allocation for HTTP adaptive streaming We consider the problem of scheduling multiple adaptive streaming video users sharing a time-varying wireless channel such as in modern 3GPP Long Term Evolution (LTE) systems. The HTTP Adaptive Streaming (HAS) framework is used by the video users for video rate adaptation. HAS is a client-driven video delivery solution that has been gaining popularity due to its inherent advantages over other existing solutions. Quality of Experience (QoE) has become the prime performance criterion for media delivery technologies, and wireless resource management is a critical part of providing a target QoE for video delivery over wireless systems. We propose a novel cross-layer Video-QoE aware optimization framework for wireless resource allocation that constrains the rebuffering probability for adaptive streaming users. We propose a Re-buffering Aware Gradient Algorithm (RAGA) to solve this optimization problem. RAGA relies on simple periodic feedback of media buffer levels by adaptive streaming clients. Our simulation results on an LTE system-level simulator demonstrate a significant reduction in re-buffering percentage using RAGA without compromising video quality.
Dynamic Adaptive Video Streaming: Towards a Systematic Comparison of ICN and TCP/IP. Streaming of video content over the Internet is experiencing unprecedented growth. While video permeates every application, it also puts tremendous pressure on the network to support users with heterogeneous access who expect a high quality of experience, in a cost-effective manner. In this context, future internet paradigms, such as information centric networking (ICN), are ...
Overlay multicast tree recovery scheme using a proactive approach The overlay multicast scheme has been regarded as an alternative to conventional IP multicast since it can support multicast functions without infrastructure-level changes. However, a multicast tree reconstruction procedure is required when a non-leaf node fails or leaves. In this paper, we propose a proactive approach to solve the aforementioned defect of the overlay multicast scheme by reserving a portion of some nodes' out-degrees in the tree construction procedure. In our proposal, a proactive route maintenance approach makes it possible to shorten the recovery time after a parent node's abrupt failure. The simulation results show that the proposed approach takes less time than existing works to reconstruct a similar tree and that it is a more effective way to deal with more nodes that have lost their parent nodes due to failure.
Toward a generalized theory of uncertainty (GTU): an outline It is a deep-seated tradition in science to view uncertainty as a province of probability theory. The generalized theory of uncertainty (GTU) which is outlined in this paper breaks with this tradition and views uncertainty in a much broader perspective. Uncertainty is an attribute of information. A fundamental premise of GTU is that information, whatever its form, may be represented as what is called a generalized constraint. The concept of a generalized constraint is the centerpiece of GTU. In GTU, a probabilistic constraint is viewed as a special (albeit important) instance of a generalized constraint. A generalized constraint is a constraint of the form X isr R, where X is the constrained variable, R is a constraining relation, generally non-bivalent, and r is an indexing variable which identifies the modality of the constraint, that is, its semantics. The principal constraints are: possibilistic (r=blank); probabilistic (r=p); veristic (r=v); usuality (r=u); random set (r=rs); fuzzy graph (r=fg); bimodal (r=bm); and group (r=g). Generalized constraints may be qualified, combined and propagated. The set of all generalized constraints together with rules governing qualification, combination and propagation constitutes the generalized constraint language (GCL). The generalized constraint language plays a key role in GTU by serving as a precisiation language for propositions, commands and questions expressed in a natural language. Thus, in GTU the meaning of a proposition drawn from a natural language is expressed as a generalized constraint. Furthermore, a proposition plays the role of a carrier of information. This is the basis for equating information to a generalized constraint. In GTU, reasoning under uncertainty is treated as propagation of generalized constraints, in the sense that rules of deduction are equated to rules which govern propagation of generalized constraints. A concept which plays a key role in deduction is that of a protoform (abbreviation of prototypical form). Basically, a protoform is an abstracted summary, a summary which serves to identify the deep semantic structure of the object to which it applies. A deduction rule has two parts: symbolic, expressed in terms of protoforms, and computational. GTU represents a significant change both in perspective and direction in dealing with uncertainty and information. The concepts and techniques introduced in this paper are illustrated by a number of examples.
The implementation of quality function deployment based on linguistic data Quality function deployment (QFD) is a customer-driven quality management and product development system for achieving higher customer satisfaction. The QFD process involves various inputs in the form of linguistic data, e.g., human perception, judgment, and evaluation of importance or relationship strength. Such data are usually ambiguous and uncertain. An aim of this paper is to examine the implementation of QFD under a fuzzy environment and to develop corresponding procedures to deal with the fuzzy data. It presents a process model using linguistic variables, fuzzy arithmetic, and defuzzification techniques. Based on an example, this paper further examines the sensitivity of the ranking of technical characteristics to the defuzzification strategy and the degree of fuzziness of fuzzy numbers. Results indicate that selection of the defuzzification strategy and membership function is important. This proposed fuzzy approach allows QFD users to avoid subjective and arbitrary quantification of linguistic data. The paper also presents a scheme to represent and interpret the results.
Fuzzy logic approach to placement problem A contemporary definition of the placement problem is characterized by multiple objectives. These objectives are: minimal area, routability, timing, and possibly some others. This paper contains a description of the placement system based on the fuzzy logic approach. Linguistic variables, their linguistic values, and membership functions are defined. Fuzzy logic rules govern the placement process. Details of implementation and experimental results are provided.
Evidential Reasoning Using Extended Fuzzy Dempster-Shafer Theory For Handling Various Facets Of Information Deficiency This work investigates the problem of combining deficient evidence for the purpose of quality assessment. The main focus of the work is modeling vagueness, ambiguity, and local nonspecificity in information within a unified approach. We introduce an extended fuzzy Dempster-Shafer scheme based on the simultaneous use of fuzzy interval-grade and interval-valued belief degree (IGIB). The latter facilitates modeling of uncertainties in terms of local ignorance associated with expert knowledge, whereas the former allows for handling the lack of information on belief degree assignments. Also, generalized fuzzy sets can be readily transformed into the proposed fuzzy IGIB structure. The reasoning for quality assessment is performed by solving nonlinear optimization problems on fuzzy Dempster-Shafer paradigm for the fuzzy IGIB structure. The application of the proposed inference method is investigated by designing a reasoning scheme for water quality monitoring and validated through the experimental data available for different sampling points in a water distribution network.
1.014052
0.012225
0.011765
0.007647
0.006209
0.00325
0.00086
0.000224
0.00005
0.000001
0
0
0
0
Forecasting the number of outpatient visits using a new fuzzy time series based on weighted-transitional matrix Forecasting the number of outpatient visits can help experts in healthcare administration make strategic decisions. If the number of outpatient visits could be forecast accurately, it would provide healthcare administrators with a basis to manage hospitals effectively, to schedule human resources and finances reasonably, and to distribute hospital material resources suitably. This paper proposes a new fuzzy time series method based on a weighted-transitional matrix, and also proposes two new forecasting methods: the Expectation Method and the Grade-Selection Method. From the verification and results, the proposed methods exhibit a relatively lower error rate in comparison to the listed methods, and could be more stable in facing ever-changing future trends. The characteristics of the proposed methods overcome the drawback of insufficient handling of information when constructing forecasting rules in previous research.
Adaptive-expectation based multi-attribute FTS model for forecasting TAIEX In recent years, many time series methods have been proposed for forecasting enrollments, weather, the economy, population growth, and stock prices. However, traditional time series models such as ARIMA are expressed by mathematical equations that are not easily understood by stock investors. In contrast, fuzzy time series can produce fuzzy rules based on linguistic values, which is more intuitive for investors than mathematical equations. Furthermore, from the literature reviews, two shortcomings are found in fuzzy time series methods: (1) they lack persuasiveness in determining the universe of discourse and the linguistic length of intervals, and (2) only one attribute (closing price) is usually considered in forecasting, rather than multiple attributes (such as closing price, open price, high price, and low price). Therefore, this paper proposes a multiple-attribute fuzzy time series (FTS) method, which incorporates a clustering method and an adaptive expectation model, to overcome the shortcomings above. In verification, using actual trading data of the Taiwan Stock Index (TAIEX) as experimental datasets, we evaluate the accuracy of the proposed method and compare its performance with the methods of Chen (1996) [7], Yu (2005) [6], and Cheng, Cheng, and Wang (2008) [20]. The proposed method is superior to the listed methods based on average error percentage (MAER).
The Roles of Fuzzy Logic and Soft Computing in the Conception, Design and Deployment of Intelligent Systems The essence of soft computing is that, unlike the traditional, hard computing, it is aimed at an accommodation with the pervasive imprecision of the real world. Thus, the guiding principle of soft computing is: ‘...exploit the tolerance for imprecision, uncertainty and partial truth to achieve tractability, robustness, low solution cost and better rapport with reality’. In the final analysis, the role model for soft computing is the human mind.
Forecasting innovation diffusion of products using trend-weighted fuzzy time-series model The time-series models have been used to make reasonably accurate predictions in weather forecasting, academic enrolment, stock price, etc. This study proposes a novel method that incorporates trend-weighting into the fuzzy time-series models advanced by Chen's and Yu's method to explore the extent to which the innovation diffusion of ICT products could be adequately described by the proposed procedure. To verify the proposed procedure, the actual DSL (digital subscriber line) data in Taiwan is illustrated, and this study evaluates the accuracy of the proposed procedure by comparing with different innovation diffusion models: Bass model, Logistic model and Dynamic model. The results show that the proposed procedure surpasses the methods listed in terms of accuracy and SSE (Sum of Squares Error).
A hybrid multi-order fuzzy time series for forecasting stock markets This paper proposes a hybrid model based on multi-order fuzzy time series, which employs rough sets theory to mine fuzzy logical relationships from time series and an adaptive expectation model to adjust forecasting results, in order to improve forecasting accuracy. Two empirical stock markets (TAIEX and NASDAQ) are used as empirical databases to verify the forecasting performance of the proposed model, and two other methodologies, proposed earlier by Chen and Yu, are employed as comparison models. In addition, to compare with a conventional statistical method, the partial autocorrelation function and autoregressive models are utilized to estimate the time lag periods within the databases. Based on the comparison results, the proposed model effectively improves the forecasting performance and outperforms the listed models. From the empirical study, the conventional statistical method and the proposed model both reveal that the estimated time lag for the two empirical databases is one lagged period.
Fuzzy time-series based on adaptive expectation model for TAIEX forecasting Time-series models have been used to make predictions in the areas of stock price forecasting, academic enrollment, and weather, etc. However, in stock markets, reasonable investors will modify their forecasts based on recent forecasting errors. Therefore, we propose a new fuzzy time-series model which incorporates the adaptive expectation model into the forecasting process to modify forecasting errors. Using actual trading data from the Taiwan Stock Index (TAIEX), we evaluate the accuracy of the proposed model by comparing our forecasts with those derived from Chen's [Chen, S. M. (1996). Forecasting enrollments based on fuzzy time-series, Fuzzy Sets and Systems, 81, 311-319] and Yu's [Yu, Hui-Kuang. (2004). Weighted fuzzy time-series models for TAIEX forecasting. Physica A, 349, 609-624] models. The comparison results indicate that our model surpasses in accuracy those suggested by Chen and Yu.
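For flavor, a compact Chen-style fuzzy time-series sketch with a final adaptive-expectation correction y(t-1) + h * (forecast - y(t-1)); the interval count, the weighting parameter h, and the toy series are assumptions for illustration, and this is not the authors' exact procedure.

import numpy as np

def fts_forecast(series, n_intervals=7, h=0.5):
    # Chen-style fuzzy time series: equal-width intervals, fuzzy logical
    # relationship groups, midpoint defuzzification, then an adaptive-expectation
    # correction  y_hat_adj(t) = y(t-1) + h * (y_hat(t) - y(t-1)).
    lo, hi = min(series) - 1, max(series) + 1
    edges = np.linspace(lo, hi, n_intervals + 1)
    mids = (edges[:-1] + edges[1:]) / 2
    states = np.clip(np.digitize(series, edges) - 1, 0, n_intervals - 1)

    groups = {}                                  # fuzzy logical relationship groups A_i -> {A_j}
    for s, t in zip(states[:-1], states[1:]):
        groups.setdefault(s, set()).add(t)

    forecasts = []
    for i in range(1, len(series)):
        rhs = groups.get(states[i - 1], {states[i - 1]})
        raw = np.mean([mids[j] for j in rhs])    # defuzzified forecast
        forecasts.append(series[i - 1] + h * (raw - series[i - 1]))
    return forecasts

toy = [91, 95, 98, 96, 99, 104, 108, 105, 110, 114]   # illustrative index values
print(fts_forecast(toy))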
Fuzzy stochastic fuzzy time series and its models In this paper, as an extension of the concept of time series, we will present the definition and models of fuzzy stochastic fuzzy time series (FSFTS), both of whose values and probabilities with which the FSFTS assumes its values are fuzzy sets, and which may not be modeled properly by the concept of time series. To investigate FSFTS, the definition of fuzzy valued probability distributions is considered and discussed. When the FSFTS is time-invariant, several preliminary conclusions are derived.
A Taxonomy and Evaluation of Dense Two-Frame Stereo Correspondence Algorithms Stereo matching is one of the most active research areas in computer vision. While a large number of algorithms for stereo correspondence have been developed, relatively little work has been done on characterizing their performance. In this paper, we present a taxonomy of dense, two-frame stereo methods. Our taxonomy is designed to assess the different components and design decisions made in individual stereo algorithms. Using this taxonomy, we compare existing stereo methods and present experiments evaluating the performance of many different variants. In order to establish a common software platform and a collection of data sets for easy evaluation, we have designed a stand-alone, flexible C++ implementation that enables the evaluation of individual components and that can easily be extended to include new algorithms. We have also produced several new multi-frame stereo data sets with ground truth and are making both the code and data sets available on the Web. Finally, we include a comparative evaluation of a large set of today's best-performing stereo algorithms.
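The paper is a taxonomy and evaluation rather than a single algorithm; as a point of reference, the sketch below implements only the simplest branch of that taxonomy, local block matching with a sum-of-absolute-differences cost and winner-take-all disparity selection, on a synthetic shifted image (window size and disparity range are arbitrary choices).

import numpy as np

def block_match_sad(left, right, max_disp=16, half=3):
    # Minimal local stereo: for each pixel in the left image, pick the disparity
    # whose window in the right image minimizes the sum of absolute differences.
    H, W = left.shape
    disp = np.zeros((H, W), dtype=np.int32)
    for y in range(half, H - half):
        for x in range(half + max_disp, W - half):
            ref = left[y - half:y + half + 1, x - half:x + half + 1]
            costs = [np.abs(ref - right[y - half:y + half + 1,
                                        x - d - half:x - d + half + 1]).sum()
                     for d in range(max_disp)]
            disp[y, x] = int(np.argmin(costs))        # winner-take-all
    return disp

rng = np.random.default_rng(0)
right_img = rng.random((40, 60))
left_img = np.roll(right_img, 5, axis=1)              # synthetic 5-pixel horizontal shift
print(np.median(block_match_sad(left_img, right_img)[10:30, 30:55]))   # should be about 5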
Duality Theory in Fuzzy Linear Programming Problems with Fuzzy Coefficients The concept of fuzzy scalar (inner) product that will be used in the fuzzy objective and inequality constraints of the fuzzy primal and dual linear programming problems with fuzzy coefficients is proposed in this paper. We also introduce a solution concept that is essentially similar to the notion of Pareto optimal solution in the multiobjective programming problems by imposing a partial ordering on the set of all fuzzy numbers. We then prove the weak and strong duality theorems for fuzzy linear programming problems with fuzzy coefficients.
Randomized rounding: a technique for provably good algorithms and algorithmic proofs We study the relation between a class of 0–1 integer linear programs and their rational relaxations. We give a randomized algorithm for transforming an optimal solution of a relaxed problem into a provably good solution for the 0–1 problem. Our technique can be extended to provide bounds on the disparity between the rational and 0–1 optima for a given problem instance.
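A small sketch of the generic randomized-rounding recipe on a toy weighted set-cover instance: solve the rational (LP) relaxation, then include each set independently with probability equal to its fractional value, repeating O(log n) rounds so coverage holds with high probability. The repetition scheme is one standard variant and not necessarily the exact construction analyzed in the paper.

import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

# toy weighted set cover: cover 6 elements with 5 sets
sets = [{0, 1, 2}, {1, 3}, {2, 4}, {3, 4, 5}, {0, 5}]
cost = np.array([3.0, 1.0, 1.0, 2.0, 1.0])
n_elem = 6

# LP relaxation: min c^T x  s.t.  for every element e, sum over sets containing e of x_S >= 1
A_ub = np.array([[-1.0 if e in s else 0.0 for s in sets] for e in range(n_elem)])
res = linprog(cost, A_ub=A_ub, b_ub=-np.ones(n_elem),
              bounds=[(0, 1)] * len(sets), method="highs")
x_frac = res.x

# randomized rounding: repeat O(log n) independent rounds and take the union
chosen = np.zeros(len(sets), dtype=bool)
for _ in range(int(np.ceil(2 * np.log(n_elem)))):
    chosen |= rng.random(len(sets)) < x_frac
covered = set().union(*(sets[i] for i in np.flatnonzero(chosen)))
print(sorted(covered), float(cost[chosen].sum()), float(cost @ x_frac))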
Compound Linguistic Scale. •Compound Linguistic Scale comprises Compound Linguistic Variable, Fuzzy Normal Distribution and Deductive Rating Strategy.•CLV can produce two-dimensional options, i.e. compound linguistic terms, to better reflect the raters' preferences.•DRS is a double-step rating approach for a rater to choose a compound linguistic term among two-dimensional options.•FND can efficiently produce a population of fuzzy numbers for a linguistic term set using only a few parameters.•CLS, as a rating interface, can be applied to various application domains in engineering and the social sciences.
Looking for a good fuzzy system interpretability index: An experimental approach Interpretability is acknowledged as the main advantage of fuzzy systems and it should be given a main role in fuzzy modeling. Classical systems are viewed as black boxes because mathematical formulas set the mapping between inputs and outputs. On the contrary, fuzzy systems (if they are built regarding some constraints) can be seen as gray boxes in the sense that every element of the whole system can be checked and understood by a human being. Interpretability is essential for those applications with high human interaction, for instance decision support systems in fields like medicine, economics, etc. Since interpretability is not guaranteed by definition, a huge effort has been done to find out the basic constraints to be superimposed during the fuzzy modeling process. People talk a lot about interpretability but the real meaning is not clear. Understanding of fuzzy systems is a subjective task which strongly depends on the background (experience, preferences, and knowledge) of the person who makes the assessment. As a consequence, although there have been a few attempts to define interpretability indices, there is still not a universal index widely accepted. As part of this work, with the aim of evaluating the most used indices, an experimental analysis (in the form of a web poll) was carried out yielding some useful clues to keep in mind regarding interpretability assessment. Results extracted from the poll show the inherent subjectivity of the measure because we collected a huge diversity of answers completely different at first glance. However, it was possible to find out some interesting user profiles after comparing carefully all the answers. It can be concluded that defining a numerical index is not enough to get a widely accepted index. Moreover, it is necessary to define a fuzzy index easily adaptable to the context of each problem as well as to the user quality criteria.
Interactive group decision-making using a fuzzy linguistic approach for evaluating the flexibility in a supply chain ► This study builds a group decision-making structure model of flexibility in supply chain management development. ► This study presents a framework for evaluating supply chain flexibility. ► This study proposes an algorithm for determining the degree of supply chain flexibility using a new fuzzy linguistic approach. ► This fuzzy linguistic approach has the advantage of preserving information without loss.
The laws of large numbers for fuzzy random variables New weak and strong laws of large numbers for fuzzy random variables are discussed in this paper by proposing convergence in probability and convergence with probability one for fuzzy random variables. We first consider the limit properties of fuzzy numbers by invoking the Hausdorff metric, and then extend them to convergence in probability and convergence with probability one for fuzzy random variables. We provide the notions of weak and strong convergence in probability and weak and strong convergence with probability one for fuzzy random variables. Finally, we establish the weak and strong laws of large numbers for fuzzy random variables in both the weak and strong sense.
1.22
0.22
0.22
0.22
0.076667
0.05
0.001875
0
0
0
0
0
0
0
Multiresolution Spatial and Temporal Coding in a Wireless Sensor Network for Long-Term Monitoring Applications In many WSN (wireless sensor network) applications, such as [1], [2], [3], the targets are to provide long-term monitoring of environments. In such applications, energy is a primary concern because sensor nodes have to regularly report data to the sink and need to continuously work for a very long time so that users may periodically request a rough overview of the monitored environment. On the other hand, users may occasionally query more in-depth data of certain areas to analyze abnormal events. These requirements motivate us to propose a multiresolution compression and query (MRCQ) framework to support in-network data compression and data storage in WSNs from both space and time domains. Our MRCQ framework can organize sensor nodes hierarchically and establish multiresolution summaries of sensing data inside the network, through spatial and temporal compressions. In the space domain, only lower resolution summaries are sent to the sink; the other higher resolution summaries are stored in the network and can be obtained via queries. In the time domain, historical data stored in sensor nodes exhibit a finer resolution for more recent data, and a coarser resolution for older data. Our methods consider the hardware limitations of sensor nodes. So, the result is expected to save sensors' energy significantly, and thus, can support long-term monitoring WSN applications. A prototyping system is developed to verify its feasibility. Simulation results also show the efficiency of MRCQ compared to existing work.
Linear compressive networks A linear compressive network (LCN) is defined as a graph of sensors in which each encoding sensor compresses incoming jointly Gaussian random signals and transmits (potentially) low-dimensional linear projections to neighbors over a noisy uncoded channel. Each sensor has a maximum power to allocate over signal subspaces. The networks of focus are acyclic, directed graphs with multiple sources and multiple destinations. LCN pathways lead to decoding leaf nodes that estimate linear functions of the original high-dimensional sources by minimizing a mean squared error (MSE) distortion cost function. An iterative optimization of local compressive matrices for all graph nodes is developed using an optimal quadratically constrained quadratic program (QCQP) step. The performance of the optimization is marked by power-compression-distortion spectra, with converse bounds based on cut-set arguments. Examples include single-layer and multi-layer (e.g. p-layer tree cascades, butterfly) networks. The LCN is a generalization of the Karhunen-Loève Transform to noisy multi-layer networks, and extends previous approaches for point-to-point and distributed compression-estimation of Gaussian signals. The framework relates to network coding in the noiseless case, and uncoded transmission in the noisy case.
Spatially-Localized Compressed Sensing and Routing in Multi-hop Sensor Networks We propose energy-efficient compressed sensing for wireless sensor networks using spatially-localized sparse projections. To keep the transmission cost for each measurement low, we obtain measurements from clusters of adjacent sensors. With localized projection, we show that joint reconstruction provides significantly better reconstruction than independent reconstruction. We also propose a metric of energy overlap between clusters and basis functions that allows us to characterize the gains of joint reconstruction for different basis functions. Compared with state of the art compressed sensing techniques for sensor network, our simulation results demonstrate significant gains in reconstruction accuracy and transmission cost.
Practical data compression in wireless sensor networks: A survey Power consumption is a critical problem affecting the lifetime of wireless sensor networks. A number of techniques have been proposed to solve this issue, such as energy-efficient medium access control or routing protocols. Among those proposed techniques, the data compression scheme is one that can be used to reduce transmitted data over wireless channels. This technique leads to a reduction in the required inter-node communication, which is the main power consumer in wireless sensor networks. In this article, a comprehensive review of existing data compression approaches in wireless sensor networks is provided. First, suitable sets of criteria are defined to classify existing techniques as well as to determine what practical data compression in wireless sensor networks should be. Next, the details of each classified compression category are described. Finally, their performance, open issues, limitations and suitable applications are analyzed and compared based on the criteria of practical data compression in wireless sensor networks.
Compressed Sensing for Networked Data Imagine a system with thousands or millions of independent components, all capable of generating and communicating data. A man-made system of this complexity was unthinkable a few decades ago, but today it is a reality - computers, cell phones, sensors, and actuators are all linked to the Internet, and every wired or wireless device is capable of generating and disseminating prodigious volumes of data. This system is not a single centrally-controlled device, rather it is an ever-growing patchwork of autonomous systems and components, perhaps more organic in nature than any human artifact that has come before. And we struggle to manage and understand this creation, which in many ways has taken on a life of its own. Indeed, several international conferences are dedicated to the scientific study of emergent Internet phenomena. This article considers a particularly salient aspect of this struggle that revolves around large- scale distributed sources of data and their storage, transmission, and retrieval. The task of transmitting information from one point to another is a common and well-understood exercise. But the problem of efficiently transmitting or sharing information from and among a vast number of distributed nodes remains a great challenge, primarily because we do not yet have well developed theories and tools for distributed signal processing, communications, and information theory in large-scale networked systems. The problem is illustrated by a simple example. Consider a network of n nodes, each having a piece of information or data xj, j = 1,...,n. These data could be files to be shared, or simply scalar values corresponding to node attributes or sensor measurements. Let us assume that each xj is a scalar quantity for the sake of this illustration. Collectively these data x = (x1,...,xn)T, arranged in a vector, are called networked data to emphasize both the distributed nature of the data and the fact that they may be shared over the underlying communications infrastructure of the network. The networked data vector may be very large; n may be a thousand or a million or more.
Signal Recovery From Random Measurements Via Orthogonal Matching Pursuit This paper demonstrates theoretically and empirically that a greedy algorithm called orthogonal matching pursuit (OMP) can reliably recover a signal with m nonzero entries in dimension d given O(m ln d) random linear measurements of that signal. This is a massive improvement over previous results, which require O(m^2) measurements. The new results for OMP are comparable with recent results for another approach called basis pursuit (BP). In some settings, the OMP algorithm is faster and easier to implement, so it is an attractive alternative to BP for signal recovery problems.
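A plain numpy sketch of OMP as described: greedily add the column most correlated with the residual, then refit by least squares on the active set. The problem sizes below are illustrative; with m = 5 and d = 256, roughly O(m ln d) ≈ 28 measurements should suffice, and 60 are used for margin.

import numpy as np

def omp(A, y, m):
    # Orthogonal matching pursuit: pick the column most correlated with the
    # residual, then re-fit by least squares on the active set.
    residual, support = y.copy(), []
    for _ in range(m):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(0)
d_dim, n_meas, m = 256, 60, 5
x_true = np.zeros(d_dim)
x_true[rng.choice(d_dim, m, replace=False)] = rng.standard_normal(m)
A = rng.standard_normal((n_meas, d_dim)) / np.sqrt(n_meas)
x_hat = omp(A, A @ x_true, m)
print(np.linalg.norm(x_hat - x_true))          # close to zero on a typical draw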
Subspace Pursuit for Compressive Sensing: Closing the Gap Between Performance and Complexity We propose a new method for reconstruction of sparse signals with and without noisy perturbations, termed the subspace pursuit algorithm. The algorithm has two important characteristics: low computational complexity, comparable to that of orthogonal matching pursuit techniques, and reconstruction accuracy of the same order as that of LP optimization methods. The presented analysis shows that in the noiseless setting, the proposed algorithm can exactly reconstruct arbitrary sparse signals provided that the sensing matrix satisfies the restricted isometry property with a constant parameter. In the noisy setting and in the case that the signal is not exactly sparse, it can be shown that the mean squared error of the reconstruction is upper bounded by constant multiples of the measurement and signal perturbation energies.
Statistical static timing analysis: how simple can we get? With an increasing trend in the variation of the primary parameters affecting circuit performance, the need for statistical static timing analysis (SSTA) has been firmly established in the last few years. While it is generally accepted that a timing analysis tool should handle parameter variations, the benefits of advanced SSTA algorithms are still questioned by the designer community because of their significant impact on the complexity of STA flows. In this paper, we present convincing evidence that a path-based SSTA approach implemented as a post-processing step captures the effect of parameter variations on circuit performance fairly accurately. On a microprocessor block implemented in 90nm technology, the error in estimating the standard deviation of the timing margin at the inputs of sequential elements is at most 0.066 FO4 delays, which translates into only 0.31% of the worst-case path delay.
Compressive Sampling and Lossy Compression Recent results in compressive sampling have shown that sparse signals can be recovered from a small number of random measurements. This property raises the question of whether random measurements can provide an efficient representation of sparse signals in an information-theoretic sense. Through both theoretical and experimental results, we show that encoding a sparse signal through simple scalar ...
Variation-aware performance verification using at-speed structural test and statistical timing Meeting the tight performance specifications mandated by the customer is critical for contract-manufactured ASICs. To address this, at-speed test has been employed to detect subtle delay failures in manufacturing. However, the increasing process spread in advanced nanometer ASICs poses considerable challenges to predicting hardware performance from timing models. Performance verification in the presence of process variation is difficult because the critical path is no longer unique. Different paths become frequency-limiting in different process corners. In this paper, we present a novel variation-aware method based on statistical timing to select critical paths for structural test. Node criticalities are computed to determine the probabilities of different circuit nodes being on the critical path across process variation. Moreover, path delays are projected into different process corners using their linear delay function forms. Experimental results for three multimillion-gate ASICs demonstrate the effectiveness of our methods.
Fast image recovery using variable splitting and constrained optimization We propose a new fast algorithm for solving one of the standard formulations of image restoration and reconstruction which consists of an unconstrained optimization problem where the objective includes an l2 data-fidelity term and a nonsmooth regularizer. This formulation allows both wavelet-based (with orthogonal or frame-based representations) regularization or total-variation regularization. Our approach is based on a variable splitting to obtain an equivalent constrained optimization formulation, which is then addressed with an augmented Lagrangian method. The proposed algorithm is an instance of the so-called alternating direction method of multipliers, for which convergence has been proved. Experiments on a set of image restoration and reconstruction benchmark problems show that the proposed algorithm is faster than the current state of the art methods.
Directional relative position between objects in image processing: a comparison between fuzzy approaches The importance of describing relationships between objects has been highlighted in works in very different areas, including image understanding. Among these relationships, directional relative position relations are important since they provide an important information about the spatial arrangement of objects in the scene. Such concepts are rather ambiguous, they defy precise definitions, but human beings have a rather intuitive and common way of understanding and interpreting them. Therefore in this context, fuzzy methods are appropriate to provide consistent definitions that integrate both quantitative and qualitative knowledge, thus providing a computational representation and interpretation of imprecise spatial relations, expressed in a linguistic way, and including quantitative knowledge. Several fuzzy approaches have been developed in the literature, and the aim of this paper is to review and compare them according to their properties and according to the types of questions they seek to answer.
Application of FMCDM model to selecting the hub location in the marine transportation: A case study in southeastern Asia Hub location selection problems have become one of the most popular and important issues not only in the truck transportation and the air transportation, but also in the marine transportation. The main focus of this paper is on container transshipment hub locations in southeastern Asia. Transshipment is the fastest growing segment of the containerport market, resulting in significant scope to develop new transshipment terminal capacity to cater for future expected traffic flows. A shipping carrier not only calculates transport distances and operation costs, but also evaluates some qualitative conditions for existing hub locations and then selects an optimal container transshipment hub location in the region. In this paper, a fuzzy multiple criteria decision-making (FMCDM) model is proposed for evaluating and selecting the container transshipment hub port. Finally, the utilization of the proposed FMCDM model is demonstrated with a case study of hub locations in southeastern Asia. The results show that the FMCDM model proposed in this paper can be used to explain the evaluation and decision-making procedures of hub location selection well. In addition, the preferences are calculated for existing hub locations and these are then compared with a new proposed container transshipment hub location in the region, in this instance the Port of Shanghai. Furthermore, a sensitivity analysis is performed.
Designing type-2 fuzzy logic system controllers via fuzzy Lyapunov synthesis for the output regulator of a servomechanism with nonlinear backlash Fuzzy Lyapunov Synthesis is extended to the design of Type-2 Fuzzy Logic System Controllers for the output regulation problem for a servomechanism with nonlinear backlash. The problem in question is to design a feedback controller so as to obtain the closed-loop system in which all trajectories are bounded and the load of the driver is regulated to a desired position while also attenuating the influence of external disturbances. The servomotor position is the only measurement available for feedback; the proposed extension is far from trivial because of nonminimum phase properties of the system. Performance issues of the Type-2 Fuzzy Logic Regulator constructed are illustrated in a simulation study.
1.105556
0.111111
0.07037
0.066667
0.010431
0.000879
0.000015
0
0
0
0
0
0
0
A general framework for accurate statistical timing analysis considering correlations The impact of parameter variations on timing due to process and environmental variations has become significant in recent years. With each new technology node this variability is becoming more prominent. In this work, we present a general statistical timing analysis (STA) framework that captures spatial correlations between gate delays. The technique presented does not make any assumption about the distributions of the parameter variations, gate delays, and arrival times. We propose a Taylor-series expansion based polynomial representation of gate delays and arrival times which is able to effectively capture the non-linear dependencies that arise due to increasing parameter variations. In order to reduce the computational complexity introduced by polynomial modeling during STA, an efficient linear-modeling driven polynomial STA scheme is proposed. On average, the degree-2 polynomial scheme achieves a 7.3× speedup compared to Monte Carlo, with 0.049 units of rms error with respect to Monte Carlo. The technique is generic and could be applied to arbitrary variations in the underlying parameters.
Statistical ordering of correlated timing quantities and its application for path ranking Correct ordering of timing quantities is essential for both timing analysis and design optimization in the presence of process variation, because timing quantities are no longer a deterministic value, but a distribution. This paper proposes a novel metric, called tiered criticalities, which guarantees to provide a unique order for a set of correlated timing quantities while properly taking into account full process space coverage. Efficient algorithms are developed to compute this metric, and its effectiveness on path ranking for at-speed testing is also demonstrated.
Fast statistical timing analysis of latch-controlled circuits for arbitrary clock periods Latch-controlled circuits have a remarkable advantage in timing performance as process variations become more relevant for circuit design. Existing methods of statistical timing analysis for such circuits, however, still need improvement in runtime and their results should be extended to provide yield information for any given clock period. In this paper, we propose a method combining a simplified iteration and a graph transformation algorithm. The result of this method is in a parametric form so that the yield for any given clock period can easily be evaluated. The graph transformation algorithm handles the constraints from nonpositive loops effectively, completely avoiding the heuristics used in other existing methods. Therefore the accuracy of the timing analysis is well maintained. Additionally, the proposed method is much faster than other existing methods. Especially for large circuits it offers about 100 times performance improvement in timing verification.
Speeding up Monte-Carlo Simulation for Statistical Timing Analysis of Digital Integrated Circuits This paper presents a pair of novel techniques to speed up path-based Monte Carlo simulation for statistical timing analysis of digital integrated circuits with no loss of accuracy. The presented techniques can be used in isolation or together. Both techniques can be readily implemented in any statistical timing framework. We compare our proposed Monte Carlo simulation with traditional Monte Carlo simulation in a rigorous framework and show that the new method is up to 2 times as efficient as the traditional method.
On hierarchical statistical static timing analysis Statistical static timing analysis deals with the increasing variations in manufacturing processes to reduce the pessimism in the worst case timing analysis. Because of the correlation between delays of circuit components, timing model generation and hierarchical timing analysis face more challenges than in static timing analysis. In this paper, a novel method to generate timing models for combinational circuits considering variations is proposed. The resulting timing models have accurate input-output delays and are about 80% smaller than the original circuits. Additionally, an accurate hierarchical timing analysis method at design level using pre-characterized timing models is proposed. This method incorporates the correlation between modules by replacing independent random variables to improve timing accuracy. Experimental results show that the correlation between modules strongly affects the delay distribution of the hierarchical design and the proposed method has good accuracy compared with Monte Carlo simulation, but is faster by three orders of magnitude.
Statistical Bellman-Ford algorithm with an application to retiming Process variations in digital circuits make sequential circuit timing validation an extremely challenging task. In this paper, a Statistical Bellman-Ford (SBF) algorithm is proposed to compute the longest path length distribution for directed graphs with cycles. Our SBF algorithm efficiently computes the statistical longest path length distribution if there exist no positive cycles, or detects one if the circuit is likely to have a positive cycle. An important application of SBF is Statistical Retiming-based Timing Analysis (SRTA), where SBF is used to check the feasibility of a given target clock period distribution for retiming. Our gate and wire delay distribution model considers several high-impact intra-die process parameters and accurately captures the spatial and reconvergent path correlations. Monte Carlo simulation is used to validate the accuracy of our SBF algorithm. To the best of our knowledge, this is the first paper to propose a statistical version of the longest path algorithm for sequential circuits.
Predicting circuit performance using circuit-level statistical timing analysis Recognizing that the delay of a circuit is extremely sensitive to manufacturing process variations, this paper proposes a methodology for statistical timing analysis. The authors present a triple-node delay model which inherently captures the effect of input transition time on the gate delays. Response surface methods are used so that the statistical gate delays are generated efficiently. A new path sensitization criterion based on the minimum propagatable pulse width (MPPW) of the gates along a path is used to check for false paths. The overlap of a path with longer paths determines its “statistical significance” to the overall circuit delay. Finally, the circuit delay probability density function is computed by performing a Monte Carlo simulation on the statistically significant path set
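A minimal Monte Carlo sketch of the general idea behind path-based statistical timing: sample correlated gate delays (one shared inter-die term plus independent intra-die terms), sum them along a set of candidate paths, and take the per-sample maximum to estimate the circuit-delay distribution. The delay model, correlation structure, and path set are purely illustrative and do not follow the paper's triple-node model or MPPW-based path sensitization.

import numpy as np

rng = np.random.default_rng(0)

# toy netlist: 4 gates, 3 candidate critical paths given as gate index lists
paths = [[0, 1, 3], [0, 2, 3], [1, 2, 3]]
mu = np.array([10.0, 12.0, 11.0, 9.0])            # nominal gate delays (arbitrary units)
sigma_g, sigma_l, n_mc = 1.0, 0.5, 20000          # inter-die and intra-die sigmas, MC samples

global_var = rng.standard_normal((n_mc, 1)) * sigma_g
local_var = rng.standard_normal((n_mc, len(mu))) * sigma_l
gate_delay = mu + global_var + local_var          # (n_mc, n_gates), correlated via global term

path_delay = np.stack([gate_delay[:, p].sum(axis=1) for p in paths], axis=1)
circuit_delay = path_delay.max(axis=1)            # statistical max over the significant paths
print(circuit_delay.mean(), circuit_delay.std(), np.percentile(circuit_delay, 99.7))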
Worst-case analysis and optimization of VLSI circuit performances In this paper, we present a new approach for realistic worst-case analysis of VLSI circuit performances and a novel methodology for circuit performance optimization. Circuit performance measures are modeled as response surfaces of the designable and uncontrollable (noise) parameters. Worst-case analysis proceeds by first computing the worst-case circuit performance value and then determining the worst-case noise parameter values by solving a nonlinear programming problem. A new circuit optimization technique is developed to find an optimal design point at which all of the circuit specifications are met under worst-case conditions. This worst-case design optimization method is formulated as a constrained multicriteria optimization. The methodologies described in this paper are applied to several VLSI circuits to demonstrate their accuracy and efficiency
Algorithm 659: Implementing Sobol's quasirandom sequence generator We empirically compare the accuracy and speed of the low-discrepancy sequence generators of Sobol' and Faure. These generators are useful for multidimensional integration and global optimization. We discuss our implementation of the Sobol' generator.
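The TOMS algorithm is a Fortran implementation; for a quick feel of why quasirandom points help in multidimensional integration, scipy's quasi-Monte Carlo module (assumed available in scipy >= 1.7) can stand in. The integrand and dimension are arbitrary choices with a known exact integral of 1.

import numpy as np
from scipy.stats import qmc

d, n = 6, 1 << 12                                  # dimension, number of points (power of 2)
f = lambda x: np.prod(3.0 * x**2, axis=1)          # integral over [0,1]^d is exactly 1

sobol_pts = qmc.Sobol(d, scramble=True, seed=0).random(n)
mc_pts = np.random.default_rng(0).random((n, d))
print("Sobol error :", abs(f(sobol_pts).mean() - 1.0))
print("pseudo error:", abs(f(mc_pts).mean() - 1.0))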
A stochastic integral equation method for modeling the rough surface effect on interconnect capacitance In this work we describe a stochastic integral equation method for computing the mean value and the variance of the capacitance of interconnects with random surface roughness. An ensemble-average Green's function is combined with a matrix Neumann expansion to compute the nominal capacitance and its variance. This method avoids time-consuming Monte Carlo simulations and the discretization of rough surfaces. Numerical experiments show that the results of the new method agree very well with Monte Carlo simulation results.
Message-Passing Algorithms For Compressed Sensing Compressed sensing aims to undersample certain high-dimensional signals yet accurately reconstruct them by exploiting signal characteristics. Accurate reconstruction is possible when the object to be recovered is sufficiently sparse in a known basis. Currently, the best known sparsity-undersampling tradeoff is achieved when reconstructing by convex optimization, which is expensive in important large-scale applications. Fast iterative thresholding algorithms have been intensively studied as alternatives to convex optimization for large-scale problems. Unfortunately known fast algorithms offer substantially worse sparsity-undersampling tradeoffs than convex optimization. We introduce a simple costless modification to iterative thresholding making the sparsity-undersampling tradeoff of the new algorithms equivalent to that of the corresponding convex optimization procedures. The new iterative-thresholding algorithms are inspired by belief propagation in graphical models. Our empirical measurements of the sparsity-undersampling tradeoff for the new algorithms agree with theoretical calculations. We show that a state evolution formalism correctly derives the true sparsity-undersampling tradeoff. There is a surprising agreement between earlier calculations based on random convex polytopes and this apparently very different theoretical formalism.
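A compact sketch of the iterative soft-thresholding recursion with the extra "Onsager" correction term on the residual, which is what distinguishes AMP from plain iterative thresholding. The threshold schedule (a multiple of the estimated residual standard deviation) is a common heuristic and not necessarily the tuning analyzed in the paper; problem sizes are illustrative.

import numpy as np

def soft(u, t):
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

def amp(A, y, n_iter=30, alpha=2.0):
    # AMP for y = A x with sparse x: soft thresholding plus the Onsager
    # correction term on the residual (the key difference from plain IST).
    # threshold = alpha * estimated residual std  -- a heuristic choice
    n, N = A.shape
    x, z = np.zeros(N), y.copy()
    for _ in range(n_iter):
        theta = alpha * np.linalg.norm(z) / np.sqrt(n)
        x_new = soft(x + A.T @ z, theta)
        onsager = z * (np.count_nonzero(x_new) / n)   # (1/delta) * <eta'> * z_prev
        z = y - A @ x_new + onsager
        x = x_new
    return x

rng = np.random.default_rng(0)
N, n, k = 500, 250, 25
x_true = np.zeros(N)
x_true[rng.choice(N, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((n, N)) / np.sqrt(n)
print(np.linalg.norm(amp(A, A @ x_true) - x_true) / np.linalg.norm(x_true))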
Compressive sensing for sparsely excited speech signals Compressive sensing (CS) has been proposed for signals with sparsity in a linear transform domain. We explore a signal-dependent unknown linear transform, namely the impulse response matrix operating on a sparse excitation, as in the linear model of speech production, for recovering compressive sensed speech. Since the linear transform is signal-dependent and unknown, unlike the standard CS formulation, a codebook of transfer functions is proposed in a matching pursuit (MP) framework for CS recovery. It is found that MP is efficient and effective at recovering CS-encoded speech as well as jointly estimating the linear model. A moderate number of CS measurements and a low-order sparsity estimate result in MP converging to the same linear transform as direct VQ of the LP vector derived from the original signal. There is also high positive correlation between the signal-domain approximation and the CS measurement-domain approximation for a large variety of speech spectra.
Introduction to Queueing Theory and Stochastic Teletraffic Models. The aim of this textbook is to provide students with basic knowledge of stochastic models that may apply to telecommunications research areas, such as traffic modelling, resource provisioning and traffic management. These study areas are often collectively called teletraffic. This book assumes prior knowledge of a programming language, mathematics, probability and stochastic processes normally taught in an electrical engineering course. For students who have some but not sufficiently strong background in probability and stochastic processes, we provide, in the first few chapters, background on the relevant concepts in these areas.
Overview of HEVC High-Level Syntax and Reference Picture Management The increasing proportion of video traffic in telecommunication networks puts an emphasis on efficient video compression technology. High Efficiency Video Coding (HEVC) is the forthcoming video coding standard that provides substantial bit rate reductions compared to its predecessors. In the HEVC standardization process, technologies such as picture partitioning, reference picture management, and parameter sets are categorized as “high-level syntax.” The design of the high-level syntax impacts the interface to systems and error resilience, and provides new functionalities. This paper presents an overview of the HEVC high-level syntax, including network abstraction layer unit headers, parameter sets, picture partitioning schemes, reference picture management, and supplemental enhancement information messages.
Scores (score_0 through score_13): 1.006986, 0.011452, 0.011452, 0.010611, 0.006355, 0.004997, 0.002502, 0.001073, 0.000103, 0.000005, 0, 0, 0, 0
The λ-average value and the fuzzy expectation of a fuzzy random variable The concepts of fuzzy random variable, and the associated fuzzy expected value, have been introduced by Puri and Ralescu as an extension of measurable set-valued functions (random sets), and of the Aumann integral of these functions, respectively. On the other hand, the λ-average function has been suggested by Campos and González as an appropriate function to rank fuzzy numbers. In this paper we are going to analyze some useful properties concerning the λ-average value of the expectation of a fuzzy random variable, and some practical implications of these properties are also commented on.
Comparison of experiments in statistical decision problems with fuzzy utilities The choice of an optimal action in a decision-making problem involving fuzzy utilities has been reduced in most previous studies to modeling the fuzzy utilities and the expected fuzzy utility and selecting a technique for ranking fuzzy numbers that fits the situation. A new element is incorporated in the decision problem, namely, the sample information supplied by a random experiment associated with the decision problem. This element, along with the use of fuzzy random variables to model fuzzy utilities and an appropriate fuzzy preference relation, makes it possible to extend, in the Bayesian framework, the concept of expected value of sample information (or gain in expected fuzzy utility due to knowledge of sample information). A criterion for comparing random experiments associated with the problem is established, and some properties confirming its suitability are analyzed. An example illustrates the application of the procedure. The criterion is contrasted with the pattern criterion based on Blackwell's concept of statistical `sufficiency'
Estimation of a simple linear regression model for fuzzy random variables A generalized simple linear regression statistical/probabilistic model in which both input and output data can be fuzzy subsets of R^p is dealt with. The regression model is based on a fuzzy-arithmetic approach and it considers the possibility of fuzzy-valued random errors. Specifically, the least-squares estimation problem in terms of a versatile metric is addressed. The solutions are established in terms of the moments of the involved random elements by employing the concept of support function of a fuzzy set. Some considerations concerning the applicability of the model are made.
The fuzzy hyperbolic inequality index associated with fuzzy random variables The aim of this paper is focussed on the quantification of the extent of the inequality associated with fuzzy-valued random variables in general populations. For this purpose, the fuzzy hyperbolic inequality index associated with general fuzzy random variables is presented and a detailed discussion of some of the most valuable properties of this index (extending those for classical inequality indices) is given. Two examples illustrating the computation of the fuzzy inequality index are also considered. Some comments and suggestions are finally included.
Bootstrap approach to the multi-sample test of means with imprecise data A bootstrap approach to the multi-sample test of means for imprecisely valued sample data is introduced. For this purpose imprecise data are modelled in terms of fuzzy values. Populations are identified with fuzzy-valued random elements, often referred to in the literature as fuzzy random variables. An example illustrates the use of the suggested method. Finally, the adequacy of the bootstrap approach to test the multi-sample hypothesis of means is discussed through a simulation comparative study.
Tools for fuzzy random variables: Embeddings and measurabilities The concept of fuzzy random variable has been shown to be as a valuable model for handling fuzzy data in statistical problems. The theory of fuzzy-valued random elements provides a suitable formalization for the management of fuzzy data in the probabilistic setting. A concise overview of fuzzy random variables, focussed on the crucial aspects for data analysis, is presented.
Routine design with information content and fuzzy quality function deployment Design can be classified into four basic categories: creative design, innovative design, redesign, and routine design. This paper describes a method for performing routine design by utilizing information content and fuzzy quality function deployment. An attempt has been made to associate with each critical characteristic of a product a value representing the information content, which is a measure of probability that a system can produce the parts as specified by the designer, using a specific manufacturing technology for making the parts. Once the information content of each design alternative is computed, the system will select an alternative with the minimum amount of information content. The proposed method provides us with a means for solving the critical design evaluation and validation problem.
Linguistic measures based on fuzzy coincidence for reaching consensus in group decision making Assuming a linguistic framework, a model for the consensus reaching problem in heterogeneous group decision making is proposed. This model contains two types of linguistic consensus measures: linguistic consensus degrees and linguistic proximities to guide the consensus reaching process. These measures evaluate the current consensus state on three levels of action: level of the pairs of alternatives, level of the alternatives, and level of the relation. They are based on a fuzzy characterization of the concept of coincidence, and they are obtained by means of several conjunction functions for handling linguistic weighted information, the LOWA operator for aggregating linguistic information, and linguistic quantifiers representing the concept of fuzzy majority.
A new fuzzy multiple attributive group decision making methodology and its application to propulsion/manoeuvring system selection problem In this paper, a new fuzzy multiple attribute decision-making (FMADM) method, which is suitable for multiple attributive group decision making (GDM) problems in fuzzy environment, is proposed to deal with the problem of ranking and selection of alternatives. Since the subjectivity, imprecision and vagueness in the estimates of a performance rating enter into multiple attribute decision-making (MADM) problems, fuzzy set theory provides a mathematical framework for modelling vagueness and imprecision. In the proposed approach, an attribute based aggregation technique for heterogeneous group of experts is employed and used for dealing with fuzzy opinion aggregation for the subjective attributes of the decision problem. The propulsion/manoeuvring system selection as a real case study is used to demonstrate the versatility and potential of the proposed method for solving fuzzy multiple attributive group decision-making problems. The proposed method is a generalised model, which can be applied to great variety of practical problems encountered in the naval architecture from propulsion/manoeuvring system selection to warship requirements definition.
Internet of Things (IoT): A vision, architectural elements, and future directions Ubiquitous sensing enabled by Wireless Sensor Network (WSN) technologies cuts across many areas of modern day living. This offers the ability to measure, infer and understand environmental indicators, from delicate ecologies and natural resources to urban environments. The proliferation of these devices in a communicating-actuating network creates the Internet of Things (IoT), wherein sensors and actuators blend seamlessly with the environment around us, and the information is shared across platforms in order to develop a common operating picture (COP). Fueled by the recent adaptation of a variety of enabling wireless technologies such as RFID tags and embedded sensor and actuator nodes, the IoT has stepped out of its infancy and is the next revolutionary technology in transforming the Internet into a fully integrated Future Internet. As we move from www (static pages web) to web2 (social networking web) to web3 (ubiquitous computing web), the need for data-on-demand using sophisticated intuitive queries increases significantly. This paper presents a Cloud centric vision for worldwide implementation of Internet of Things. The key enabling technologies and application domains that are likely to drive IoT research in the near future are discussed. A Cloud implementation using Aneka, which is based on interaction of private and public Clouds is presented. We conclude our IoT vision by expanding on the need for convergence of WSN, the Internet and distributed computing directed at technological research community.
On interval fuzzy negations There exist infinitely many ways to extend the classical propositional connectives to the set [0,1], preserving their behaviors in the extremes 0 and 1 exactly as in the classical logic. However, it is a consensus that this is not sufficient, and, therefore, these extensions must also preserve some minimal logical properties of the classical connectives. The notions of t-norms, t-conorms, fuzzy negations and fuzzy implications take these considerations into account. In previous works, the author, jointly with other colleagues, generalized these notions to the set U={[a,b] | 0 ≤ a ≤ b ≤ 1}, providing canonical constructions to obtain, for example, interval t-norms that are the best interval representations of t-norms. In this paper, we consider the notion of interval fuzzy negation and generalize, in a natural way, several notions related to fuzzy negations, such as the ones of equilibrium point and negation-preserving automorphism. We show that the main properties of these notions are preserved in those generalizations.
Efficient large-scale power grid analysis based on preconditioned Krylov-subspace iterative methods In this paper, we propose preconditioned Krylov-subspace iterative methods to perform efficient DC and transient simulations for large-scale linear circuits with an emphasis on power delivery circuits. We also prove that a circuit with inductors can be simplified from MNA to NA format, and the matrix becomes an s.p.d. matrix. This property makes it suitable for the conjugate gradient method with incomplete Cholesky decomposition as the preconditioner, which is faster than other direct and iterative methods. Extensive experimental results on large-scale industrial power grid circuits show that our method is over 200 times faster for DC analysis and around 10 times faster for transient simulation compared to SPICE3. Furthermore, our algorithm reduces memory usage by over 75% compared to SPICE3 while the accuracy is not compromised.
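The sketch below shows the general flavor of a preconditioned conjugate-gradient solve on an s.p.d. system in SciPy; the grid matrix is a toy stand-in, and an incomplete-LU factorization is used as the preconditioner because SciPy does not ship an incomplete Cholesky routine.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# toy s.p.d. "power grid" conductance matrix: 1-D resistor chain with small grounding conductance
n = 1000
G = sp.diags([-1.0, 2.01, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)                                    # current injections at every node

# incomplete-LU preconditioner (stand-in for incomplete Cholesky)
ilu = spla.spilu(G, drop_tol=1e-4)
M = spla.LinearOperator(G.shape, matvec=ilu.solve)

voltages, info = spla.cg(G, b, M=M)               # info == 0 signals convergence
```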
Stochastic dominant singular vectors method for variation-aware extraction In this paper we present an efficient algorithm for variation-aware interconnect extraction. The problem we are addressing can be formulated mathematically as the solution of linear systems with matrix coefficients that are dependent on a set of random variables. Our algorithm is based on representing the solution vector as a summation of terms. Each term is a product of an unknown vector in the deterministic space and an unknown direction in the stochastic space. We then formulate a simple nonlinear optimization problem which uncovers sequentially the most relevant directions in the combined deterministic-stochastic space. The complexity of our algorithm scales with the sum (rather than the product) of the sizes of the deterministic and stochastic spaces, hence it is orders of magnitude more efficient than many of the available state of the art techniques. Finally, we validate our algorithm on a variety of onchip and off-chip capacitance and inductance extraction problems, ranging from moderate to very large size, not feasible using any of the available state of the art techniques.
Fuzzy optimization of units products in mix-product selection problem using fuzzy linear programming approach In this paper, the modified S-curve membership function methodology is used in a real-life industrial problem of mix-product selection. This problem occurs in production planning management, whereby a decision maker plays an important role in making decisions in an uncertain environment. As analysts, we try to find a good enough solution for the decision maker to make a final decision. An industrial application of fuzzy linear programming (FLP) through the S-curve membership function has been investigated using a set of real-life data collected from a chocolate manufacturing company. The problem of fuzzy product-mix selection has been defined. The objective of this paper is to find the optimal units of products with a higher level of satisfaction, with vagueness as a key factor. Since several decisions were to be taken, a table of optimal units of products with respect to vagueness and degree of satisfaction has been defined to identify solutions with a higher level of units of products and a higher degree of satisfaction. The fuzzy outcome shows that higher units of products need not lead to a higher degree of satisfaction. The findings of this work indicate that the optimal decision depends on the vagueness factor in the fuzzy system of the mix-product selection problem. Furthermore, the highest level of units of products is obtained when the vagueness is low.
Scores (score_0 through score_13): 1.07308, 0.033333, 0.02, 0.018708, 0.00501, 0.000594, 0.000031, 0, 0, 0, 0, 0, 0, 0
Algorithm 890: Sparco: A Testing Framework for Sparse Reconstruction Sparco is a framework for testing and benchmarking algorithms for sparse reconstruction. It includes a large collection of sparse reconstruction problems drawn from the imaging, compressed sensing, and geophysics literature. Sparco is also a framework for implementing new test problems and can be used as a tool for reproducible research. Sparco is implemented entirely in Matlab, and is released as open-source software under the GNU Public License.
A First-Order Smoothed Penalty Method for Compressed Sensing We propose a first-order smoothed penalty algorithm (SPA) to solve the sparse recovery problem $\min\{\|x\|_1:Ax=b\}$. SPA is efficient as long as the matrix-vector product $Ax$ and $A^{T}y$ can be computed efficiently; in particular, $A$ need not have orthogonal rows. SPA converges to the target signal by solving a sequence of penalized optimization subproblems, and each subproblem is solved using Nesterov's optimal algorithm for simple sets [Yu. Nesterov, Introductory Lectures on Convex Optimization: A Basic Course, Kluwer Academic Publishers, Norwell, MA, 2004] and [Yu. Nesterov, Math. Program., 103 (2005), pp. 127-152]. We show that the SPA iterates $x_k$ are $\epsilon$-feasible; i.e. $\|Ax_k-b\|_2\leq\epsilon$ and $\epsilon$-optimal; i.e. $|~\|x_k\|_1-\|x^\ast\|_1|\leq\epsilon$ after $\tilde{\mathcal{O}}(\epsilon^{-\frac{3}{2}})$ iterations. SPA is able to work with $\ell_1$, $\ell_2$, or $\ell_{\infty}$ penalty on the infeasibility, and SPA can be easily extended to solve the relaxed recovery problem $\min\{\|x\|_1:\|Ax-b\|_2\leq\delta\}$.
Optimally Tuned Iterative Reconstruction Algorithms for Compressed Sensing We conducted an extensive computational experiment, lasting multiple CPU-years, to optimally select parameters for two important classes of algorithms for finding sparse solutions of underdetermined systems of linear equations. We make the optimally tuned implementations available at sparselab.stanford.edu; they run "out of the box" with no user tuning: it is not necessary to select thresholds or know the likely degree of sparsity. Our class of algorithms includes iterative hard and soft thresholding with or without relaxation, as well as CoSaMP, subspace pursuit and some natural extensions. As a result, our optimally tuned algorithms dominate such proposals. Our notion of optimality is defined in terms of phase transitions, i.e., we maximize the number of nonzeros at which the algorithm can successfully operate. We show that the phase transition is a well-defined quantity with our suite of random underdetermined linear systems. Our tuning gives the highest transition possible within each class of algorithms. We verify by extensive computation the robustness of our recommendations to the amplitude distribution of the nonzero coefficients as well as the matrix ensemble defining the underdetermined system. Our findings include the following. 1) For all algorithms, the worst amplitude distribution for nonzeros is generally the constant-amplitude random-sign distribution, where all nonzeros are the same amplitude. 2) Various random matrix ensembles give the same phase transitions; random partial isometries may give different transitions and require different tuning. 3) Optimally tuned subspace pursuit dominates optimally tuned CoSaMP, particularly so when the system is almost square.
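For reference, plain (untuned) iterative soft thresholding looks roughly as follows; the fixed threshold `lam`, iteration count, and step size are placeholder assumptions and do not reflect the optimal tuning studied in the paper.

```python
import numpy as np

def iterative_soft_thresholding(A, y, lam=0.05, n_iter=500):
    """Baseline IST sketch for min 0.5*||Ax - y||^2 + lam*||x||_1 (illustrative parameters)."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2        # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)
        v = x - step * grad
        x = np.sign(v) * np.maximum(np.abs(v) - lam * step, 0.0)   # soft-threshold update
    return x
```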
Fast image recovery using variable splitting and constrained optimization We propose a new fast algorithm for solving one of the standard formulations of image restoration and reconstruction which consists of an unconstrained optimization problem where the objective includes an l2 data-fidelity term and a nonsmooth regularizer. This formulation allows both wavelet-based (with orthogonal or frame-based representations) regularization or total-variation regularization. Our approach is based on a variable splitting to obtain an equivalent constrained optimization formulation, which is then addressed with an augmented Lagrangian method. The proposed algorithm is an instance of the so-called alternating direction method of multipliers, for which convergence has been proved. Experiments on a set of image restoration and reconstruction benchmark problems show that the proposed algorithm is faster than the current state of the art methods.
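A bare-bones alternating-direction sketch for an l2-plus-l1 formulation is shown below; it uses an explicit matrix inverse and a plain soft-threshold proximal step, which is only a didactic stand-in for the frame- and TV-aware solver described in the paper.

```python
import numpy as np

def admm_l1(A, y, lam=0.1, rho=1.0, n_iter=200):
    """ADMM sketch for min 0.5*||Ax - y||^2 + lam*||x||_1 via the splitting x = z."""
    N = A.shape[1]
    Q = np.linalg.inv(A.T @ A + rho * np.eye(N))   # factor once, reuse every iteration
    Aty = A.T @ y
    x = z = u = np.zeros(N)
    for _ in range(n_iter):
        x = Q @ (Aty + rho * (z - u))              # quadratic (data-fidelity) subproblem
        w = x + u
        z = np.sign(w) * np.maximum(np.abs(w) - lam / rho, 0.0)   # l1 proximal step
        u = u + x - z                              # scaled dual-variable update
    return z
```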
Subspace Pursuit for Compressive Sensing: Closing the Gap Between Performance and Complexity We propose a new method for reconstruction of sparse signals with and without noisy perturbations, termed the subspace pursuit algorithm. The algorithm has two important characteristics: low computational complexity, comparable to that of orthogonal matching pursuit techniques, and reconstruction accuracy of the same order as that of LP optimization methods. The presented analysis shows that in the noiseless setting, the proposed algorithm can exactly reconstruct arbitrary sparse signals provided that the sensing matrix satisfies the restricted isometry property with a constant parameter. In the noisy setting and in the case that the signal is not exactly sparse, it can be shown that the mean squared error of the reconstruction is upper bounded by constant multiples of the measurement and signal perturbation energies.
Probing the Pareto Frontier for Basis Pursuit Solutions The basis pursuit problem seeks a minimum one-norm solution of an underdetermined least-squares problem. Basis pursuit denoise (BPDN) fits the least-squares problem only approximately, and a single parameter determines a curve that traces the optimal trade-off between the least-squares fit and the one-norm of the solution. We prove that this curve is convex and continuously differentiable over all points of interest, and show that it gives an explicit relationship to two other optimization problems closely related to BPDN. We describe a root-finding algorithm for finding arbitrary points on this curve; the algorithm is suitable for problems that are large scale and for those that are in the complex domain. At each iteration, a spectral gradient-projection method approximately minimizes a least-squares problem with an explicit one-norm constraint. Only matrix-vector operations are required. The primal-dual solution of this problem gives function and derivative information needed for the root-finding method. Numerical experiments on a comprehensive set of test problems demonstrate that the method scales well to large problems.
Local Linear Convergence for Alternating and Averaged Nonconvex Projections The idea of a finite collection of closed sets having “linearly regular intersection” at a point is crucial in variational analysis. This central theoretical condition also has striking algorithmic consequences: in the case of two sets, one of which satisfies a further regularity condition (convexity or smoothness, for example), we prove that von Neumann’s method of “alternating projections” converges locally to a point in the intersection, at a linear rate associated with a modulus of regularity. As a consequence, in the case of several arbitrary closed sets having linearly regular intersection at some point, the method of “averaged projections” converges locally at a linear rate to a point in the intersection. Inexact versions of both algorithms also converge linearly.
Compressed Sensing: How Sharp Is the Restricted Isometry Property? Compressed sensing (CS) seeks to recover an unknown vector with $N$ entries by making far fewer than $N$ measurements; it posits that the number of CS measurements should be comparable to the information content of the vector, not simply $N$. CS combines directly the important task of compression with the measurement task. Since its introduction in 2004 there have been hundreds of papers on CS, a large fraction of which develop algorithms to recover a signal from its compressed measurements. Because of the paradoxical nature of CS—exact reconstruction from seemingly undersampled measurements—it is crucial for acceptance of an algorithm that rigorous analyses verify the degree of undersampling the algorithm permits. The restricted isometry property (RIP) has become the dominant tool used for the analysis in such cases. We present here an asymmetric form of RIP that gives tighter bounds than the usual symmetric one. We give the best known bounds on the RIP constants for matrices from the Gaussian ensemble. Our derivations illustrate the way in which the combinatorial nature of CS is controlled. Our quantitative bounds on the RIP allow precise statements as to how aggressively a signal can be undersampled, the essential question for practitioners. We also document the extent to which RIP gives precise information about the true performance limits of CS, by comparison with approaches from high-dimensional geometry.
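For concreteness, the symmetric RIP constant and the asymmetric variant discussed above can be written as follows; the superscripts L and U are labels chosen here for the lower and upper constants, not necessarily the paper's notation.

```latex
% symmetric RIP of order k: the smallest \delta_k such that, for every k-sparse x,
(1-\delta_k)\,\lVert x\rVert_2^2 \;\le\; \lVert Ax\rVert_2^2 \;\le\; (1+\delta_k)\,\lVert x\rVert_2^2 .
% asymmetric form: track the two sides separately with constants \delta_k^{L} and \delta_k^{U},
1-\delta_k^{L} \;\le\; \frac{\lVert Ax\rVert_2^2}{\lVert x\rVert_2^2} \;\le\; 1+\delta_k^{U},
% which gives tighter bounds whenever the lower and upper deviations differ.
```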
A Christoffel Function Weighted Least Squares Algorithm for Collocation Approximations We propose, theoretically investigate, and numerically validate an algorithm for the Monte Carlo solution of least-squares polynomial approximation problems in a collocation framework. Our investigation is motivated by applications in the collocation approximation of parametric functions, which frequently entails construction of surrogates via orthogonal polynomials. A standard Monte Carlo approach would draw samples according to the density defining the orthogonal polynomial family. Our proposed algorithm instead samples with respect to the (weighted) pluripotential equilibrium measure of the domain, and subsequently solves a weighted least-squares problem, with weights given by evaluations of the Christoffel function. We present theoretical analysis to motivate the algorithm, and numerical results that show our method is superior to standard Monte Carlo methods in many situations of interest.
Breaking the Curse of Dimensionality, Or How to Use SVD in Many Dimensions For $d$-dimensional tensors with possibly large $d>3$, an hierarchical data structure, called the Tree-Tucker format, is presented as an alternative to the canonical decomposition. It has asymptotically the same (and often even smaller) number of representation parameters and viable stability properties. The approach involves a recursive construction described by a tree with the leafs corresponding to the Tucker decompositions of three-dimensional tensors, and is based on a sequence of SVDs for the recursively obtained unfolding matrices and on the auxiliary dimensions added to the initial “spatial” dimensions. It is shown how this format can be applied to the problem of multidimensional convolution. Convincing numerical examples are given.
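The sequence-of-SVDs idea is easiest to see in the closely related tensor-train construction sketched below in NumPy; this is an illustrative relative of the Tree-Tucker format, not the exact recursive tree algorithm of the paper.

```python
import numpy as np

def tt_svd(tensor, eps=1e-10):
    """Decompose a d-way array into a chain of 3-way cores via sequential SVDs of unfoldings."""
    dims = tensor.shape
    cores, rank = [], 1
    unfold = tensor.reshape(dims[0], -1)
    for k in range(len(dims) - 1):
        U, s, Vt = np.linalg.svd(unfold, full_matrices=False)
        keep = max(1, int(np.sum(s > eps * s[0])))              # truncate tiny singular values
        cores.append(U[:, :keep].reshape(rank, dims[k], keep))  # store the k-th 3-way core
        unfold = (np.diag(s[:keep]) @ Vt[:keep]).reshape(keep * dims[k + 1], -1)
        rank = keep
    cores.append(unfold.reshape(rank, dims[-1], 1))
    return cores

cores = tt_svd(np.random.rand(4, 5, 6, 7))   # four 3-way cores for a 4-dimensional tensor
```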
Principle Hessian direction based parameter reduction for interconnect networks with process variation As CMOS technology enters the nanometer regime, the increasing process variation is bringing a manifest impact on circuit performance. To accurately take account of both global and local process variations, a large number of random variables (or parameters) have to be incorporated into circuit models. This measure in turn raises the complexity of the circuit models. The current paper proposes a Principle Hessian Direction (PHD) based parameter reduction approach for interconnect networks. The proposed approach relies on each parameter's impact on circuit performance to decide whether to keep or reduce the parameter. Compared with the existing principal component analysis (PCA) method, this performance-based property provides us with a significantly smaller parameter set after reduction. The experimental results also support our conclusions. In interconnect cases, the proposed method reduces 70% of the parameters. In some cases (the mesh example in the current paper), the new approach leads to an 85% reduction. We also tested ISCAS benchmarks. In all cases, an average of 53% reduction is observed with less than 3% error in mean and less than 8% error in variation.
A framework for understanding human factors in web-based electronic commerce The World Wide Web and email are used increasingly for purchasing and selling products. The use of the internet for these functions represents a significant departure from the standard range of information retrieval and communication tasks for which it has most often been used. Electronic commerce should not be assumed to be information retrieval, it is a separate task-domain, and the software systems that support it should be designed from the perspective of its goals and constraints. At present there are many different approaches to the problem of how to support seller and buyer goals using the internet. They range from standard, hierarchically arranged, hyperlink pages to “electronic sales assistants”, and from text-based pages to 3D virtual environments. In this paper, we briefly introduce the electronic commerce task from the perspective of the buyer, and then review and analyse the technologies. A framework is then proposed to describe the design dimensions of electronic commerce. We illustrate how this framework may be used to generate additional, hypothetical technologies that may be worth further exploration.
Lattices of convex normal functions The algebra of truth values of type-2 fuzzy sets is the set of all functions from the unit interval into itself, with operations defined in terms of certain convolutions of these functions with respect to pointwise max and min. This algebra has been studied rather extensively, both from a theoretical and from a practical point of view. It has a number of interesting subalgebras, and this paper is about the subalgebra of all convex normal functions, and closely related ones. These particular algebras are De Morgan algebras, and our concern is principally with their completeness as lattices. A special feature of our treatment is a representation of these algebras as monotone functions with pointwise order, making the operations more intuitive.
Thermal switching error versus delay tradeoffs in clocked QCA circuits The quantum-dot cellular automata (QCA) model offers a novel nano-domain computing architecture by mapping the intended logic onto the lowest energy configuration of a collection of QCA cells, each with two possible ground states. A four-phased clocking scheme has been suggested to keep the computations at the ground state throughout the circuit. This clocking scheme, however, induces latency or delay in the transmission of information from input to output. In this paper, we study the interplay of computing error behavior with delay or latency of computation induced by the clocking scheme. Computing errors in QCA circuits can arise due to the failure of the clocking scheme to switch portions of the circuit to the ground state with change in input. Some of these non-ground states will result in output errors and some will not. The larger the size of each clocking zone, i.e., the greater the number of cells in each zone, the more the probability of computing errors. However, larger clocking zones imply faster propagation of information from input to output, i.e., reduced delay. Current QCA simulators compute just the ground state configuration of a QCA arrangement. In this paper, we offer an efficient method to compute the N-lowest energy modes of a clocked QCA circuit. We model the QCA cell arrangement in each zone using a graph-based probabilistic model, which is then transformed into a Markov tree structure defined over subsets of QCA cells. This tree structure allows us to compute the N-lowest energy configurations in an efficient manner by local message passing. We analyze the complexity of the model and show it to be polynomial in terms of the number of cells, assuming a finite neighborhood of influence for each QCA cell, which is usually the case. The overall low-energy spectrum of multiple clocking zones is constructed by concatenating the low-energy spectra of the individual clocking zones. We demonstrate how the model can be used to study the tradeoff between switching errors and clocking zones.
Scores (score_0 through score_13): 1.050643, 0.051412, 0.005111, 0.004254, 0.002913, 0.00142, 0.000408, 0.000103, 0.00003, 0.000007, 0.000001, 0, 0, 0
Correlation-aware statistical timing analysis with non-Gaussian delay distributions Process variations have a growing impact on circuit performance for today's integrated circuit (IC) technologies. The non-Gaussian delay distributions as well as the correlations among delays make statistical timing analysis more challenging than ever. In this paper, the authors presented an efficient block-based statistical timing analysis approach with linear complexity with respect to the circuit size, which can accurately predict non-Gaussian delay distributions from realistic nonlinear gate and interconnect delay models. This approach accounts for all correlations, from manufacturing process dependence, to re-convergent circuit paths to produce more accurate statistical timing predictions. With this approach, circuit designers can have increased confidence in the variation estimates, at a low additional computation cost.
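For orientation, the classical Gaussian-case "max" operation that such block-based approaches generalize is Clark's formula below (exact for the mean of the max of two jointly Gaussian arrival times; the approximation lies in treating the max itself as Gaussian for further propagation). The non-Gaussian machinery of the paper goes beyond this, and the symbols here are generic.

```latex
% X_i \sim N(\mu_i, \sigma_i^2) with correlation \rho:
\theta^2 = \sigma_1^2 + \sigma_2^2 - 2\rho\,\sigma_1\sigma_2, \qquad
\alpha = \frac{\mu_1 - \mu_2}{\theta},
\qquad
\mathbb{E}\!\left[\max(X_1, X_2)\right] = \mu_1\,\Phi(\alpha) + \mu_2\,\Phi(-\alpha) + \theta\,\varphi(\alpha),
% where \Phi and \varphi are the standard normal CDF and PDF.
```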
Statistical ordering of correlated timing quantities and its application for path ranking Correct ordering of timing quantities is essential for both timing analysis and design optimization in the presence of process variation, because timing quantities are no longer a deterministic value, but a distribution. This paper proposes a novel metric, called tiered criticalities, which guarantees to provide a unique order for a set of correlated timing quantities while properly taking into account full process space coverage. Efficient algorithms are developed to compute this metric, and its effectiveness on path ranking for at-speed testing is also demonstrated.
Fast statistical timing analysis of latch-controlled circuits for arbitrary clock periods Latch-controlled circuits have a remarkable advantage in timing performance as process variations become more relevant for circuit design. Existing methods of statistical timing analysis for such circuits, however, still need improvement in runtime and their results should be extended to provide yield information for any given clock period. In this paper, we propose a method combining a simplified iteration and a graph transformation algorithm. The result of this method is in a parametric form so that the yield for any given clock period can easily be evaluated. The graph transformation algorithm handles the constraints from nonpositive loops effectively, completely avoiding the heuristics used in other existing methods. Therefore the accuracy of the timing analysis is well maintained. Additionally, the proposed method is much faster than other existing methods. Especially for large circuits it offers about 100 times performance improvement in timing verification.
A New Statistical Timing Analysis Using Gaussian Mixture Models For Delay And Slew Propagated Together In order to improve the performance of the existing statistical timing analysis, slew distributions must be taken into account and a mechanism to propagate them together with delay distributions along signal paths is necessary. This paper introduces Gaussian mixture models to represent the slew and delay distributions, and proposes a novel algorithm for statistical timing analysis. The algorithm propagates a pair of delay and slew in a given circuit graph, and changes the delay distributions of circuit elements dynamically by propagated slews. The proposed model and algorithm are evaluated by comparing with Monte Carlo simulation. The experimental results show that the accuracy improvement in the $\mu+3\sigma$ value of maximum delay is up to 4.5 points over the current statistical timing analysis using Gaussian distributions.
Statistical timing verification for transparently latched circuits through structural graph traversal Level-sensitive transparent latches are widely used in high-performance sequential circuit designs. Under process variations, the timing of a transparently latched circuit will adapt random delays at runtime due to time borrowing. The central problem to determine the timing yield is to compute the probability of the presence of a positive cycle in the latest latch timing graph. Existing algorithms are either optimistic since cycles are omitted or require iterations that cannot be polynomially bounded. In this paper, we present the first algorithm to compute such probability based on block-based statistical timing analysis that, first, covers all cycles through a structural graph traversal, and second, terminates within a polynomial number of statistical "sum" and "max" operations. Experimental results confirm that the proposed approach is effective and efficient.
A probabilistic analysis of pipelined global interconnect under process variations The main thesis of this paper is to perform a reliability based performance analysis for a shared latch inserted global interconnect under uncertainty. We first put forward a novel delay metric named DMA for estimation of interconnect delay probability density function considering process variations. Without considerable loss in accuracy, DMA can achieve high computational efficiency even in a large space of random variables. We then propose a comprehensive probabilistic methodology for sampling transfers, on a shared latch inserted global interconnect, that highly improves the reliability of the interconnect. Improvements up to 125% are observed in the reliability when compared to deterministic sampling approach. It is also shown that dual phase clocking scheme for pipelined global interconnect is able to meet more stringent timing constraints due to its lower latency.
A general framework for accurate statistical timing analysis considering correlations The impact of parameter variations on timing due to process and environmental variations has become significant in recent years. With each new technology node this variability is becoming more prominent. In this work, we present a general statistical timing analysis (STA) framework that captures spatial correlations between gate delays. The technique presented does not make any assumption about the distributions of the parameter variations, gate delay and arrival times. The authors proposed a Taylor-series expansion based polynomial representation of gate delays and arrival times which is able to effectively capture the non-linear dependencies that arise due to increasing parameter variations. In order to reduce the computational complexity introduced due to polynomial modeling during STA, an efficient linear-modeling driven polynomial STA scheme was proposed. On an average the degree-2 polynomial scheme had a 7.3 × speedup as compared to Monte Carlo with 0.049 units of rms error with respect to Monte Carlo. The technique is generic and could be applied to arbitrary variations in the underlying parameters.
Guaranteed passive balancing transformations for model order reduction The major concerns in state-of-the-art model reduction algorithms are: achieving accurate models of sufficiently small size, numerically stable and efficient generation of the models, and preservation of system properties such as passivity. Algorithms, such as PRIMA, generate guaranteed-passive models for systems with special internal structure, using numerically stable and efficient Krylov-subspace iterations. Truncated balanced realization (TBR) algorithms, as used to date in the design automation community, can achieve smaller models with better error control, but do not necessarily preserve passivity. In this paper, we show how to construct TBR-like methods that generate guaranteed passive reduced models and in addition are applicable to state-space systems with arbitrary internal structure.
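A compact square-root balanced-truncation sketch is given below using SciPy's Lyapunov solver; it illustrates the standard (not guaranteed-passive) TBR projection that the paper builds on, and assumes a stable, minimal state-space model so the Gramians are positive definite.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

def balanced_truncation(A, B, C, r):
    """Square-root TBR sketch: reduce (A, B, C) to order r (plain TBR, no passivity guarantee)."""
    P = solve_continuous_lyapunov(A, -B @ B.T)     # controllability Gramian: A P + P A^T = -B B^T
    Q = solve_continuous_lyapunov(A.T, -C.T @ C)   # observability Gramian:  A^T Q + Q A = -C^T C
    Lp = cholesky(P, lower=True)
    Lq = cholesky(Q, lower=True)
    U, hsv, Vt = svd(Lq.T @ Lp)                    # hsv holds the Hankel singular values
    S = np.diag(1.0 / np.sqrt(hsv[:r]))
    T = Lp @ Vt[:r].T @ S                          # right projection matrix
    W = Lq @ U[:, :r] @ S                          # left projection matrix
    return W.T @ A @ T, W.T @ B, C @ T, hsv
```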
Parametric yield maximization using gate sizing based on efficient statistical power and delay gradient computation With the increased significance of leakage power and performance variability, the yield of a design is becoming constrained both by power and performance limits, thereby significantly complicating circuit optimization. In this paper, we propose a new optimization method for yield optimization under simultaneous leakage power and performance limits. The optimization approach uses a novel leakage power and performance analysis that is statistical in nature and considers the correlation between leakage power and performance to enable accurate computation of circuit yield under power and delay limits. We then propose a new heuristic approach to incrementally compute the gradient of yield with respect to gate sizes in the circuit with high efficiency and accuracy. We then show how this gradient information can be effectively used by a non-linear optimizer to perform yield optimization. We consider both inter-die and intra-die variations with correlated and random components. The proposed approach is implemented and tested and we demonstrate up to 40% yield improvement compared to a deterministically optimized circuit.
Algorithm 823: Implementing scrambled digital sequences Random scrambling of deterministic (t, m, s)-nets and (t, s)-sequences eliminates their inherent bias while retaining their low-discrepancy properties. This article describes an implementation of two types of random scrambling, one proposed by Owen and another proposed by Faure and Tezuka. The four different constructions of digital sequences implemented are those proposed by Sobol', Faure, Niederreiter, and Niederreiter and Xing. Because the random scrambling involves manipulating all digits of each point, the code must be written carefully to minimize the execution time. Computed root mean square discrepancies of the scrambled sequences are compared to known theoretical results. Furthermore, the performances of these sequences on various test problems are discussed.
Interior-Point Method for Nuclear Norm Approximation with Application to System Identification The nuclear norm (sum of singular values) of a matrix is often used in convex heuristics for rank minimization problems in control, signal processing, and statistics. Such heuristics can be viewed as extensions of $\ell_1$-norm minimization techniques for cardinality minimization and sparse signal estimation. In this paper we consider the problem of minimizing the nuclear norm of an affine matrix-valued function. This problem can be formulated as a semidefinite program, but the reformulation requires large auxiliary matrix variables, and is expensive to solve by general-purpose interior-point solvers. We show that problem structure in the semidefinite programming formulation can be exploited to develop more efficient implementations of interior-point methods. In the fast implementation, the cost per iteration is reduced to a quartic function of the problem dimensions and is comparable to the cost of solving the approximation problem in the Frobenius norm. In the second part of the paper, the nuclear norm approximation algorithm is applied to system identification. A variant of a simple subspace algorithm is presented in which low-rank matrix approximations are computed via nuclear norm minimization instead of the singular value decomposition. This has the important advantage of preserving linear matrix structure in the low-rank approximation. The method is shown to perform well on publicly available benchmark data.
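The nuclear-norm heuristic itself is easy to state with a generic convex-optimization layer such as CVXPY, as in the toy matrix-fitting sketch below; this goes through a general-purpose solver and does not reflect the specialized interior-point implementation of the paper, and the data are made up.

```python
import numpy as np
import cvxpy as cp

# toy problem: recover a low-rank matrix from half of its entries via nuclear-norm minimization
rng = np.random.default_rng(0)
M = rng.standard_normal((20, 5)) @ rng.standard_normal((5, 20))   # rank-5 ground truth
mask = (rng.random((20, 20)) < 0.5).astype(float)                 # observed-entry pattern

X = cp.Variable((20, 20))
problem = cp.Problem(cp.Minimize(cp.normNuc(X)),
                     [cp.multiply(mask, X - M) == 0])             # match the observed entries
problem.solve()
print("estimated rank:", np.linalg.matrix_rank(X.value, tol=1e-6))
```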
Task-Driven Dictionary Learning Modeling data with linear combinations of a few elements from a learned dictionary has been the focus of much recent research in machine learning, neuroscience, and signal processing. For signals such as natural images that admit such sparse representations, it is now well established that these models are well suited to restoration tasks. In this context, learning the dictionary amounts to solving a large-scale matrix factorization problem, which can be done efficiently with classical optimization tools. The same approach has also been used for learning features from data for other purposes, e.g., image classification, but tuning the dictionary in a supervised way for these tasks has proven to be more difficult. In this paper, we present a general formulation for supervised dictionary learning adapted to a wide variety of tasks, and present an efficient algorithm for solving the corresponding optimization problem. Experiments on handwritten digit classification, digital art identification, nonlinear inverse image problems, and compressed sensing demonstrate that our approach is effective in large-scale settings, and is well suited to supervised and semi-supervised classification, as well as regression tasks for data that admit sparse representations.
Type-2 Fuzzy Arithmetic Using Alpha-Planes This paper examines type-2 fuzzy arithmetic using interval analysis. It relies heavily on alpha-cuts and alpha-planes. Furthermore, we discuss the use of quasi type-2 fuzzy sets proposed by Mendel and Liu and define quasi type-2 fuzzy numbers. Arithmetic operations of such numbers are defined and a worked example is presented.
Bounding the Dynamic Behavior of an Uncertain System via Polynomial Chaos-based Simulation Parametric uncertainty can represent parametric tolerance, parameter noise or parameter disturbances. The effects of these uncertainties on the time evolution of a system can be extremely significant, mostly when studying closed-loop operation of control systems. The presence of uncertainty makes the modeling process challenging, since it is impossible to express the behavior of the system with a deterministic approach. If the uncertainties can be defined in terms of probability density function, probabilistic approaches can be adopted. In many cases, the most useful aspect is the evaluation of the worst-case scenario, thus limiting the problem to the evaluation of the boundary of the set of solutions. This is particularly true for the analysis of robust stability and performance of a closed-loop system. The goal of this paper is to demonstrate how the polynomial chaos theory (PCT) can simplify the determination of the worst-case scenario, quickly providing the boundaries in time domain. The proposed approach is documented with examples and with the description of the Maple worksheet developed by the authors for the automatic processing in the PCT framework.
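In the notation commonly used for this approach (the symbols here are generic, not tied to the paper's worksheet), a polynomial chaos expansion approximates a response $y$ of random inputs $\xi$ as follows; bounds on the response can then be estimated from the truncated expansion at each time step.

```latex
y(\xi) \;\approx\; \sum_{i=0}^{P} c_i\, \Psi_i(\xi),
\qquad
c_i \;=\; \frac{\mathbb{E}\left[\, y(\xi)\, \Psi_i(\xi) \,\right]}{\mathbb{E}\left[\, \Psi_i(\xi)^2 \,\right]},
% where the \Psi_i are polynomials orthogonal with respect to the probability density of \xi.
```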
Scores (score_0 through score_13): 1.003564, 0.006139, 0.006139, 0.004557, 0.003293, 0.002762, 0.001927, 0.001094, 0.000369, 0.00003, 0, 0, 0, 0
A probabilistic framework for power-optimal repeater insertion in global interconnects under parameter variations This paper addresses the problem of power dissipation during the buffer insertion phase of interconnect performance optimization in nanometer scale designs taking all significant parameter variations into account. The relative effect of different device, interconnect and environmental variations on delay and different components of power has been studied. A probabilistic framework to optimize buffer-interconnect designs under variations has been presented and results are compared with those obtained through simple deterministic optimization. Also, statistical models for delay and power under parameter variations have been developed using linear regression techniques. Under statistical analysis, both power and performance of buffer-interconnect designs are shown to degrade with increasing amount of variations. Also, % error in power estimation for power-optimal repeater designs is shown to be significant if variations are not taken into account. Furthermore, it has been shown that due to variations, significantly higher penalties in delay are needed to operate at power levels similar to those under no variations. Finally, the percentage savings in total power for a given penalty in delay are shown to improve with increasing amount of parameter variations.
PRIMO: probability interpretation of moments for delay calculation Moments of the impulse response are widely used for interconnect delay analysis, from the explicit Elmore delay (first moment of the impulse response) expression, to moment matching methods which create reduced order transimpedance and transfer function approximations. However, the Elmore delay is fast becoming ineffective for deep submicron technologies, and reduced order transfer function delays are impractical for use as early-phase design metrics or as design optimization cost functions. This paper describes an approach for fitting moments of the impulse response to probability density functions so that delays can be estimated from probability tables. For RC trees it is demonstrated that the incomplete gamma function provides a provably stable approximation. The step response delay is obtained from a one-dimensional table lookup.
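For reference, the circuit moments referred to above are the coefficients of the transfer function's Taylor expansion around $s=0$, and the Elmore delay is the (negated) first moment of the normalized impulse response:

```latex
H(s) \;=\; \sum_{k\ge 0} m_k\, s^k,
\qquad
m_k \;=\; \frac{(-1)^k}{k!}\int_0^{\infty} t^k\, h(t)\, dt,
\qquad
T_{\mathrm{Elmore}} \;=\; -\,m_1 \;=\; \int_0^{\infty} t\, h(t)\, dt \quad (\text{with } m_0 = 1).
```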
h-gamma: an RC delay metric based on a gamma distribution approximation of the homogeneous response Recently a probability interpretation of moments was proposed as a compromise between the Elmore delay and higher-order moment matching for RC timing estimation [5]. By modeling RC impulses as time-shifted incomplete gamma distribution functions, the delays could be obtained via table lookup using a gamma integral table and the first three moments of the impulse response. However, while this approximation worked well for many examples, it struggles with responses when the metal resistance becomes dominant, and produces results with impractical time-shift values. In this paper the probability interpretation is extended to the circuit homogeneous response, without requiring the time shift parameter. The gamma distribution is used to characterize the normalized homogeneous portion of the step response. For a generalized RC interconnect model (RC tree or mesh), the stability of the homogeneous-gamma distribution model is guaranteed. It is demonstrated that when a table model is carefully constructed the h-gamma approximation provides for excellent improvement over the Elmore delay in terms of accuracy, with very little additional cost in terms of CPU time.
Stochastic analysis of interconnect performance in the presence of process variations Deformations in interconnect due to process variations can lead to significant performance degradation in deep sub-micron circuits. Timing analyzers attempt to capture the effects of variation on delay with simplified models. The timing verification of RC or RLC networks requires the substitution of such simplified models with spatial stochastic processes that capture the random nature of process variations. The present work proposes a new and viable method to compute the stochastic response of interconnects. The technique models the stochastic response in an infinite dimensional Hilbert space in terms of orthogonal polynomial expansions. A finite representation is obtained by using the Galerkin approach of minimizing the Hilbert space norm of the residual error. The key advance of the proposed method is that it provides a functional representation of the response of the system in terms of the random variables that represent the process variations. The proposed algorithm has been implemented in a procedure called OPERA, results from OPERA simulations on commercial design test cases match well with those from the classical Monte Carlo SPICE simulations and from perturbation methods. Additionally OPERA shows good computational efficiency: speedup factor of 60 has been observed over Monte Carlo SPICE simulations.
Predicting Circuit Performance Using Circuit-level Statistical Timing Analysis
Model reduction of variable-geometry interconnects using variational spectrally-weighted balanced truncation This paper presents a spectrally-weighted balanced truncation technique for RLC interconnects, a technique needed when the interconnect circuit parameters change as a result of variations in the manufacturing process. The salient features of this algorithm are the inclusion of parameter variations in the RLC interconnect, the guaranteed stability of the reduced transfer function, and the availability of provable frequency-weighted error bounds for the reduced-order system. This paper shows that the balanced truncation technique is an effective model-order reduction technique when variations in the circuit parameters are taken into consideration. Experimental results show that the new variational spectrally-weighted balanced truncation attains, on average, 20% more accuracy than the variational Krylov-subspace-based model-order reduction techniques while the run-time is also, on average, 5% faster.
Statistical static timing analysis: how simple can we get? With an increasing trend in the variation of the primary parameters affecting circuit performance, the need for statistical static timing analysis (SSTA) has been firmly established in the last few years. While it is generally accepted that a timing analysis tool should handle parameter variations, the benefits of advanced SSTA algorithms are still questioned by the designer community because of their significant impact on the complexity of STA flows. In this paper, we present convincing evidence that a path-based SSTA approach implemented as a post-processing step captures the effect of parameter variations on circuit performance fairly accurately. On a microprocessor block implemented in 90nm technology, the error in estimating the standard deviation of the timing margin at the inputs of sequential elements is at most 0.066 FO4 delays, which translates into only 0.31% of the worst-case path delay.
Logical structure of fuzzy IF-THEN rules This paper provides a logical basis for manipulation with fuzzy IF-THEN rules. Our theory is wide enough and it encompasses not only finding a conclusion by means of the compositional rule of inference due to Lotfi A. Zadeh but also other kinds of approximate reasoning methods, e.g., perception-based deduction, provided that there exists a possibility to characterize them within a formal logical system. In contrast with other approaches employing variants of multiple-valued first-order logic, the approach presented here employs fuzzy type theory of V. Novák which has sufficient expressive power to present the essential concepts and results in a compact, elegant and justifiable form. Within the effectively formalized representation developed here, based on a complete logical system, it is possible to reconstruct numerous well-known properties of CRI-related fuzzy inference methods, albeit not from the analytic point of view as usually presented, but as formal derivations of the logical system employed. The authors are confident that eventually all relevant knowledge about fuzzy inference methods based on fuzzy IF-THEN rule bases will be represented, formalized and backed up by proof within the well-founded logical representation presented here. An immediate positive consequence of this approach is that suddenly all elements of a fuzzy inference method based on fuzzy IF-THEN rules are 'first class citizens' of the representation: there are clear, logically founded definitions for fuzzy IF-THEN rule bases to be consistent, complete, or independent.
Face recognition: eigenface, elastic matching, and neural nets This paper is a comparative study of three recently proposed algorithms for face recognition: eigenface, autoassociation and classification neural nets, and elastic matching. After these algorithms were analyzed under a common statistical decision framework, they were evaluated experimentally on four individual data bases, each with a moderate subject size, and a combined data base with more than a hundred different subjects. Analysis and experimental results indicate that the eigenface algorithm, which is essentially a minimum distance classifier, works well when lighting variation is small. Its performance deteriorates significantly as lighting variation increases. The elastic matching algorithm, on the other hand, is insensitive to lighting, face position, and expression variations and therefore is more versatile. The performance of the autoassociation and classification nets is upper bounded by that of the eigenface but is more difficult to implement in practice
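The eigenface pipeline described above is essentially PCA followed by a minimum-distance classifier, as in the short scikit-learn sketch below; the random arrays stand in for flattened face images and are purely illustrative.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X_train = rng.random((100, 64 * 64))          # 100 flattened 64x64 "face" images (placeholder data)
y_train = rng.integers(0, 10, size=100)       # subject labels
x_query = rng.random((1, 64 * 64))            # probe image

pca = PCA(n_components=20).fit(X_train)       # the principal components play the role of eigenfaces
train_proj = pca.transform(X_train)
query_proj = pca.transform(x_query)

# minimum-distance classification in the eigenface subspace
nearest = np.argmin(np.linalg.norm(train_proj - query_proj, axis=1))
print("predicted subject:", y_train[nearest])
```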
Impact of interconnect variations on the clock skew of a gigahertz microprocessor Due to the large die sizes and tight relative clock skew margins, the impact of interconnect manufacturing variations on the clock skew in today's gigahertz microprocessors can no longer be ignored. Unlike manufacturing variations in the devices, the impact of the interconnect manufacturing variations on IC timing performance cannot be captured by worst/best case corner point methods. Thus it is difficult to estimate the clock skew variability due to interconnect variations. In this paper we analyze the timing impact of several key statistically independent interconnect variations in a context-dependent manner by applying a previously reported interconnect variational order-reduction technique. The results show that the interconnect variations can cause up to 25% clock skew variability in a modern microprocessor design.
On proactive perfectly secure message transmission This paper studies the interplay of network connectivity and perfectly secure message transmission under the corrupting influence of a Byzantine mobile adversary that may move from player to player but can corrupt no more than t players at any given time. It is known that, in the stationary adversary model where the adversary corrupts the same set of t players throughout the protocol, perfectly secure communication among any pair of players is possible if and only if the underlying synchronous network is (2t + 1)-connected. Surprisingly, we show that (2t + 1)-connectivity is sufficient (and of course, necessary) even in the proactive (mobile) setting where the adversary is allowed to corrupt different sets of t players in different rounds of the protocol. In other words, adversarial mobility has no effect on the possibility of secure communication. Towards this, we use the notion of a Communication Graph, which is useful in modelling scenarios with adversarial mobility. We also show that protocols for reliable and secure communication proposed in [15] can be modified to tolerate the mobile adversary. Further these protocols are round-optimal if the underlying network is a collection of disjoint paths from the sender S to receiver R.
Rule-base structure identification in an adaptive-network-based fuzzy inference system We summarize Jang's architecture of employing an adaptive network and the Kalman filtering algorithm to identify the system parameters. Given a surface structure, the adaptively adjusted inference system performs well on a number of interpolation problems. We generalize Jang's basic model so that it can be used to solve classification problems by employing parameterized t-norms. We also enhance the model to include weights of importance so that feature selection becomes a component of the modeling scheme. Next, we discuss two ways of identifying system structures based on Jang's architecture: the top-down approach, and the bottom-up approach. We introduce a data structure, called a fuzzy binary boxtree, to organize rules so that the rule base can be matched against input signals with logarithmic efficiency. To preserve the advantage of parallel processing assumed in fuzzy rule-based inference systems, we give a parallel algorithm for pattern matching with a linear speedup. Moreover, we consider the communication and storage cost of an interpolation model and propose a rule combination mechanism to build a simplified version of the original rule base according to a given focus set. This scheme can be used in various situations of pattern representation or data compression, such as in image coding or in hierarchical pattern recognition.
Process variability-aware transient fault modeling and analysis Due to reduction in device feature size and supply voltage, the sensitivity of digital systems to transient faults is increasing dramatically. As technology scales further, the increase in transistor integration capacity also leads to the increase in process and environmental variations. Despite these difficulties, it is expected that systems remain reliable while delivering the required performance. Reliability and variability are emerging as new design challenges, thus pointing to the importance of modeling and analysis of transient faults and variation sources for the purpose of guiding the design process. This work presents a symbolic approach to modeling the effect of transient faults in digital circuits in the presence of variability due to process manufacturing. The results show that using a nominal case and not including variability effects, can underestimate the SER by 5% for the 50% yield point and by 10% for the 90% yield point.
Designing type-2 fuzzy logic system controllers via fuzzy Lyapunov synthesis for the output regulator of a servomechanism with nonlinear backlash Fuzzy Lyapunov Synthesis is extended to the design of Type-2 Fuzzy Logic System Controllers for the output regulation problem for a servomechanism with nonlinear backlash. The problem in question is to design a feedback controller so as to obtain the closed-loop system in which all trajectories are bounded and the load of the driver is regulated to a desired position while also attenuating the influence of external disturbances. The servomotor position is the only measurement available for feedback; the proposed extension is far from trivial because of nonminimum phase properties of the system. Performance issues of the Type-2 Fuzzy Logic Regulator constructed are illustrated in a simulation study.
1.24
0.034388
0.026746
0.004286
0.000357
0.00004
0
0
0
0
0
0
0
0
Uncertain linguistic fuzzy soft sets and their applications in group decision making A novel concept of uncertain linguistic fuzzy soft sets (ULFSSs) is developed. Some traditional set operations of such fuzzy soft sets are discussed. Some algebra operations and the aggregation process of ULFSSs are studied. The application of ULFSSs in decision making is considered. The aim of this paper is to develop a novel concept of uncertain linguistic fuzzy soft sets (ULFSSs), which applies the notion of uncertain linguistic fuzzy sets to soft set theory. The relationships between two ULFSSs, including the inclusion relation, the equal relation and the complement relation, are studied based on the binary relations. We also introduce some basic set operations for ULFSSs, such as the 'AND' and 'OR' operations and the algebra operations. The properties of these operations are also discussed. As an application of this new fuzzy soft set, we propose a ULFSSs-based group decision making model, in which the weights of decision makers are obtained from a non-linear optimization model according to the 'Technique for Order of Preference by Similarity to Ideal Solution' (TOPSIS) method and the maximum entropy theory. Finally, a sound quality assessment problem is investigated to illustrate the feasibility and validity of the approach mentioned in this paper.
2-Tuple linguistic soft set and its application to group decision making The aim of this paper is to put forward the 2-tuple linguistic soft set by combining the concepts of 2-tuple linguistic term set and soft set. The traditional set operations and corresponding properties are investigated. We develop the algebraic operations and discuss their corresponding properties based on which we introduce the applications of this theory in solving decision making problems. Four algorithms using the notion of 2-tuple linguistic soft information aggregation function are developed to handle group decision making problem. Finally, a selection problem of investment strategy is shown to illustrate the feasibility and validity of our approach.
A Hesitant Fuzzy Linguistic Todim Method Based On A Score Function Hesitant fuzzy linguistic term sets (HFLTSs) are very useful for dealing with the situations in which the decision makers hesitate among several linguistic terms to assess an alternative. Some multi-criteria decision-making (MCDM) methods have been developed to deal with HFLTSs. These methods are derived under the assumption that the decision maker is completely rational and do not consider the decision maker's psychological behavior. But some studies about behavioral experiments have shown that the decision maker is bounded rational in decision processes and the behavior of the decision maker plays an important role in decision analysis. In this paper, we extend the classical TODIM (an acronym in Portuguese of interactive and multi-criteria decision-making) method to solve MCDM problems dealing with HFLTSs and considering the decision maker's psychological behavior. A novel score function to compare HFLTSs more effectively is defined. This function is also used in the proposed TODIM method. Finally, a decision-making problem that concerns the evaluation and ranking of several telecommunications service providers is used to illustrate the validity and applicability of the proposed method.
On the use of multiplicative consistency in hesitant fuzzy linguistic preference relations. As a new preference structure, the hesitant fuzzy linguistic preference relation (HFLPR) was recently introduced by Rodríguez, Martínez, and Herrera to efficiently address situations in which the decision makers (DMs) are hesitant about several possible linguistic terms for the preference degrees over paired comparisons of alternatives. In this paper, we define the multiplicative consistency of HFLPRs to ensure that the DMs are being neither random nor illogical, and propose a characterization about the multiplicative consistency of HFLPRs. A consistency index is defined to measure the deviation degree between a HFLPR and its multiplicative consistent HFLPR. For a HFLPR with unacceptably multiplicative consistency, we develop a consistency-improving process to adjust it into an acceptably multiplicative one. Moreover, we use the hesitant fuzzy linguistic aggregation operators to aggregate preferences in the acceptably multiplicative HFLPR to obtain the ranking results. Several illustrative examples are further provided to verify the developed methods. Finally, a comparison with other methods in the existing literature is performed to illustrate the advantages of the new methods.
Interval-valued hesitant fuzzy linguistic sets and their applications in multi-criteria decision-making problems. An interval-valued hesitant fuzzy linguistic set (IVHFLS) can serve as an extension of both a linguistic term set and an interval-valued hesitant fuzzy set. This new set combines quantitative evaluation with qualitative evaluation; these can describe the real preferences of decision-makers and reflect their uncertainty, hesitancy, and inconsistency. This work focuses on multi-criteria decision-making (MCDM) problems in which the criteria are in different priority levels and the criteria values take the form of interval-valued hesitant fuzzy linguistic numbers (IVHFLNs). The new approach to solving these problems is based on the prioritized aggregation operators of IVHFLNs. Having reviewed the relevant literature, we provide interval-valued hesitant fuzzy linguistic operations and apply some linguistic scale functions, which have been improved on the basis of psychological theory and prospect theory. Ultimately, two kinds of prioritized aggregation operators of IVHFLNs are developed, which extend to a grouping prioritized situation and are applied to MCDM problems. Finally, an example is provided to illustrate and verify the proposed approach in two separate situations, which are then compared to other representative methods.
Anaphora for everyone: pronominal anaphora resolution without a parser We present an algorithm for anaphora resolution which is a modified and extended version of that developed by (Lappin and Leass, 1994). In contrast to that work, our algorithm does not require in-depth, full, syntactic parsing of text. Instead, with minimal compromise in output quality, the modifications enable the resolution process to work from the output of a part of speech tagger, enriched only with annotations of grammatical function of lexical items in the input text stream. Evaluation of the results of our implementation demonstrates that accurate anaphora resolution can be realized within natural language processing frameworks which do not---or cannot---employ robust and reliable parsing components.
Singularity detection and processing with wavelets The mathematical characterization of singularities with Lipschitz exponents is reviewed. Theorems that estimate local Lipschitz exponents of functions from the evolution across scales of their wavelet transform are reviewed. It is then proven that the local maxima of the wavelet transform modulus detect the locations of irregular structures and provide numerical procedures to compute their Lipschitz exponents. The wavelet transform of singularities with fast oscillations has a particular behavior that is studied separately. The local frequency of such oscillations is measured from the wavelet transform modulus maxima. It has been shown numerically that one- and two-dimensional signals can be reconstructed, with a good approximation, from the local maxima of their wavelet transform modulus. As an application, an algorithm is developed that removes white noises from signals by analyzing the evolution of the wavelet transform maxima across scales. In two dimensions, the wavelet transform maxima indicate the location of edges in images.
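As a rough, numpy-only illustration of the modulus-maxima idea reviewed above, the sketch below computes a crude continuous wavelet transform by convolving the signal with a Mexican-hat wavelet and marks the local maxima of the transform modulus at each scale. The wavelet choice, the normalization and the maxima test are assumptions made for the example, not the procedures analyzed in the paper.

```python
import numpy as np

def mexican_hat(t, s):
    """Mexican-hat (second derivative of a Gaussian) mother wavelet at scale s."""
    x = t / s
    return (1.0 - x**2) * np.exp(-x**2 / 2.0)

def modulus_maxima(signal, scales):
    """Return, per scale, the indices of local maxima of the wavelet-transform modulus.

    Chains of maxima that persist across scales point at singular (irregular)
    locations in the signal, which is the behaviour exploited in the paper."""
    n = len(signal)
    t = np.arange(-(n // 2), n - n // 2)           # length-n grid centred near zero
    maxima = {}
    for s in scales:
        w = np.convolve(signal, mexican_hat(t, s), mode="same") / np.sqrt(s)
        m = np.abs(w)
        is_max = (m[1:-1] > m[:-2]) & (m[1:-1] >= m[2:])
        maxima[s] = np.flatnonzero(is_max) + 1
    return maxima
```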
Cubature Kalman Filters In this paper, we present a new nonlinear filter for high-dimensional state estimation, which we have named the cubature Kalman filter (CKF). The heart of the CKF is a spherical-radial cubature rule, which makes it possible to numerically compute multivariate moment integrals encountered in the nonlinear Bayesian filter. Specifically, we derive a third-degree spherical-radial cubature rule that provides a set of cubature points scaling linearly with the state-vector dimension. The CKF may therefore provide a systematic solution for high-dimensional nonlinear filtering problems. The paper also includes the derivation of a square-root version of the CKF for improved numerical stability. The CKF is tested experimentally in two nonlinear state estimation problems. In the first problem, the proposed cubature rule is used to compute the second-order statistics of a nonlinearly transformed Gaussian random variable. The second problem addresses the use of the CKF for tracking a maneuvering aircraft. The results of both experiments demonstrate the improved performance of the CKF over conventional nonlinear filters.
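The heart of the CKF described above is the third-degree spherical-radial cubature rule, which approximates Gaussian-weighted integrals with 2n equally weighted points. The following is a minimal sketch of that rule only (cubature-point generation and moment propagation, not a full filter and not the authors' code); the function names are assumptions.

```python
import numpy as np

def cubature_points(mean, cov):
    """Third-degree spherical-radial cubature points: 2n points, each with weight 1/(2n)."""
    n = mean.size
    sqrt_cov = np.linalg.cholesky(cov)             # a square root of the covariance
    offsets = np.sqrt(n) * np.hstack([np.eye(n), -np.eye(n)])   # (n, 2n)
    return mean[:, None] + sqrt_cov @ offsets      # columns are the cubature points

def propagate_moments(f, mean, cov):
    """Approximate the mean and covariance of y = f(x) for Gaussian x via the cubature rule.

    f maps an n-vector to a 1-D output vector."""
    pts = cubature_points(mean, cov)
    ys = np.column_stack([f(pts[:, i]) for i in range(pts.shape[1])])
    y_mean = ys.mean(axis=1)
    centered = ys - y_mean[:, None]
    y_cov = centered @ centered.T / ys.shape[1]
    return y_mean, y_cov
```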
Efficient approximation of random fields for numerical applications This article is dedicated to the rapid computation of separable expansions for the approximation of random fields. We consider approaches based on techniques from the approximation of non-local operators on the one hand and based on the pivoted Cholesky decomposition on the other hand. Especially, we provide an a posteriori error estimate for the pivoted Cholesky decomposition in terms of the trace. Numerical examples are provided to validate and quantify the presented methods. Copyright (c) 2015 John Wiley & Sons, Ltd.
Stereo image quality: effects of mixed spatio-temporal resolution We explored the response of the human visual system to mixed-resolution stereo video-sequences, in which one eye view was spatially or temporally low-pass filtered. It was expected that the perceived quality, depth, and sharpness would be relatively unaffected by low-pass filtering, compared to the case where both eyes viewed a filtered image. Subjects viewed two 10-second stereo video-sequences, in which the right-eye frames were filtered vertically (V) and horizontally (H) at 1/2 H, 1/2 V, 1/4 H, 1/4 V, 1/2 H 1/2 V, 1/2 H 1/4 V, 1/4 H 1/2 V, and 1/4 H 1/4 V resolution. Temporal filtering was implemented for a subset of these conditions at 1/2 temporal resolution, or with drop-and-repeat frames. Subjects rated the overall quality, sharpness, and overall sensation of depth. It was found that spatial filtering produced acceptable results: the overall sensation of depth was unaffected by low-pass filtering, while ratings of quality and of sharpness were strongly weighted towards the eye with the greater spatial resolution. By comparison, temporal filtering produced unacceptable results: field averaging and drop-and-repeat frame conditions yielded images with poor quality and sharpness, even though perceived depth was relatively unaffected. We conclude that spatial filtering of one channel of a stereo video-sequence may be an effective means of reducing the transmission bandwidth
Sparse Reconstruction by Separable Approximation Finding sparse approximate solutions to large underdetermined linear systems of equations is a common problem in signal/image processing and statistics. Basis pursuit, the least absolute shrinkage and selection operator (LASSO), wavelet-based deconvolution and reconstruction, and compressed sensing (CS) are a few well-known areas in which problems of this type appear. One standard approach is to minimize an objective function that includes a quadratic (ℓ2) error term added to a sparsity-inducing (usually ℓ1) regularizer. We present an algorithmic framework for the more general problem of minimizing the sum of a smooth convex function and a nonsmooth, possibly nonconvex regularizer. We propose iterative methods in which each step is obtained by solving an optimization subproblem involving a quadratic term with diagonal Hessian (i.e., separable in the unknowns) plus the original sparsity-inducing regularizer; our approach is suitable for cases in which this subproblem can be solved much more rapidly than the original problem. Under mild conditions (namely convexity of the regularizer), we prove convergence of the proposed iterative algorithm to a minimum of the objective function. In addition to solving the standard ℓ2-ℓ1 case, our framework yields efficient solution techniques for other regularizers, such as an ℓ∞ norm and group-separable regularizers. It also generalizes immediately to the case in which the data is complex rather than real. Experiments with CS problems show that our approach is competitive with the fastest known methods for the standard ℓ2-ℓ1 problem, as well as being efficient on problems with other separable regularization terms.
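For orientation, the standard ℓ2-ℓ1 objective mentioned in the abstract can be attacked with a plain iterative shrinkage-thresholding loop; the sketch below is that simpler relative of the paper's framework, not the proposed algorithm itself, and the constant step size is an assumption.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of the l1 norm: component-wise soft-thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, y, lam, n_iter=500):
    """Minimize 0.5*||A x - y||^2 + lam*||x||_1 by iterative shrinkage-thresholding."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2         # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)
        x = soft_threshold(x - step * grad, lam * step)
    return x
```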
A review on the design and optimization of interval type-2 fuzzy controllers A review of the methods used in the design of interval type-2 fuzzy controllers has been considered in this work. The fundamental focus of the work is based on the basic reasons for optimizing type-2 fuzzy controllers for different areas of application. Recently, bio-inspired methods have emerged as powerful optimization algorithms for solving complex problems. In the case of designing type-2 fuzzy controllers for particular applications, the use of bio-inspired optimization methods have helped in the complex task of finding the appropriate parameter values and structure of the fuzzy systems. In this review, we consider the application of genetic algorithms, particle swarm optimization and ant colony optimization as three different paradigms that help in the design of optimal type-2 fuzzy controllers. We also mention alternative approaches to designing type-2 fuzzy controllers without optimization techniques. We also provide a comparison of the different optimization methods for the case of designing type-2 fuzzy controllers.
Fuzzy Power Command Enhancement in Mobile Communications Systems
On Fuzziness, Its Homeland and Its Neighbour
1.2
0.2
0.028571
0.022222
0.0125
0
0
0
0
0
0
0
0
0
Admissible orders of typical hesitant fuzzy elements and their application in ordered information fusion in multi-criteria decision making • Develop the total orders (called admissible orders) of HFEs for MCDM. • Derive the distinct rankings of HFEs from different admissible orders. • Redefine the hesitant fuzzy OWA operator based on the proposed total orders. • Investigate a series of desirable properties of the operator.
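For context on the last two highlights, the classical OWA aggregation that the hesitant fuzzy OWA operator generalizes attaches weights to positions in the sorted argument list rather than to particular arguments. A minimal sketch follows; the weighting vector is a user-supplied assumption and must have the same length as the value list.

```python
import numpy as np

def owa(values, weights):
    """Ordered weighted averaging: weight the values after sorting them in descending order."""
    ordered = np.sort(np.asarray(values, dtype=float))[::-1]
    return float(ordered @ np.asarray(weights, dtype=float))

# Example: owa([0.3, 0.9, 0.6], [0.5, 0.3, 0.2]) = 0.5*0.9 + 0.3*0.6 + 0.2*0.3 = 0.69
```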
Fuzzy VIKOR method: A case study of the hospital service evaluation in Taiwan. This study proposes a framework based on the concept of fuzzy set theory and the VIKOR method to provide a rational, scientific and systematic process for evaluating the hospital service quality under a fuzzy environment where the uncertainty, subjectivity and vagueness are addressed with linguistic variables parameterized by triangular fuzzy numbers. This study applies the fuzzy multi-criteria decision making approach to determine the importance weights of evaluation criteria and the VIKOR method is taken to consolidate the service quality performance ratings of the feasible alternatives. An empirical case involving 33 evaluation criteria, 2 public and 3 private medical centres in Taiwan assessed by 18 evaluators from various fields of the medical industry is used to demonstrate the proposed approach. The analysis result reveals that the service quality of private hospitals is better than that of public hospitals because the private hospitals are rarely subsidized by governmental agencies. These private hospitals have to fend for themselves to retain existing patients or attract new patients to ensure sustainable survival.
New distance and similarity measures on hesitant fuzzy sets and their applications in multiple criteria decision making. The distance measures of hesitant fuzzy elements (HFEs) h1(x) and h2(x) introduced in the literature only cover the divergence of the values, but fail to consider the difference between the numbers of values of h1(x) and h2(x). However, the main characteristic of an HFE is that it can describe hesitant situations flexibly. Such a hesitation is depicted by the number of values of the HFE being greater than one. Hence, it is necessary to take into account both the difference of the values and that of the numbers when we study the difference between HFEs. In this paper, we introduce the concept of the hesitance degree of a hesitant fuzzy element, which describes the decision maker's hesitance in the decision making process. Several novel distance and similarity measures between hesitant fuzzy sets (HFSs) are developed, in which both the values and the numbers of values of the HFE are taken into account. The properties of the distance measures are discussed. Finally, we apply our proposed distance measures in multiple criteria decision making to illustrate their validity and applicability.
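A hedged sketch of the general idea in the abstract: compare two hesitant fuzzy elements through both their value-wise deviation and a term that reflects how many values each element contains. The pessimistic extension rule, the hesitance-degree formula and the equal mixing weight below are illustrative assumptions, not the measures actually proposed in the paper.

```python
import numpy as np

def extend(hfe, length):
    """Pessimistically extend an HFE (list of membership values) to a given length
    by repeating its smallest value, so two HFEs can be compared element-wise."""
    values = sorted(hfe)
    return np.array(values + [values[0]] * (length - len(values)))

def hesitance_degree(hfe):
    """Grows with the number of values in the HFE; zero when there is no hesitation."""
    return 1.0 - 1.0 / len(hfe)

def hesitant_distance(h1, h2, alpha=0.5):
    """Mix of value-wise deviation and hesitance-degree deviation (alpha is an assumption)."""
    length = max(len(h1), len(h2))
    value_part = float(np.abs(extend(h1, length) - extend(h2, length)).mean())
    hesitance_part = abs(hesitance_degree(h1) - hesitance_degree(h2))
    return alpha * value_part + (1.0 - alpha) * hesitance_part
```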
Multi-attribute group decision making method based on geometric Bonferroni mean operator of trapezoidal interval type-2 fuzzy numbers The concepts of interval-valued possibility mean value of IT2 FS are introduced. A new fuzzy geometric Bonferroni mean aggregation operator of IT2 FS is proposed. Present a new method to handle FMAGDM problems under interval type-2 fuzzy environment. In this paper, we investigate the fuzzy multi-attribute group decision making (FMAGDM) problems in which all the information provided by the decision makers (DMs) is expressed as the trapezoidal interval type-2 fuzzy sets (IT2 FS). We introduce the concepts of interval possibility mean value and present a new method for calculating the possibility degree of two trapezoidal IT2 FS. Then, we develop two aggregation techniques called the trapezoidal interval type-2 fuzzy geometric Bonferroni mean (TIT2FGBM) operator and the trapezoidal interval type-2 fuzzy weighted geometric Bonferroni mean (TIT2FWGBM) operator. We study its properties and discuss its special cases. Based on the TIT2FWGBM operator and the possibility degree, the method of FMAGDM with trapezoidal interval type-2 fuzzy information is proposed. Finally, an illustrative example is given to verify the developed approaches and to demonstrate their practicality and effectiveness.
A Hesitant Fuzzy Linguistic Todim Method Based On A Score Function Hesitant fuzzy linguistic term sets (HFLTSs) are very useful for dealing with the situations in which the decision makers hesitate among several linguistic terms to assess an alternative. Some multi-criteria decision-making (MCDM) methods have been developed to deal with HFLTSs. These methods are derived under the assumption that the decision maker is completely rational and do not consider the decision maker's psychological behavior. But some studies about behavioral experiments have shown that the decision maker is bounded rational in decision processes and the behavior of the decision maker plays an important role in decision analysis. In this paper, we extend the classical TODIM (an acronym in Portuguese of interactive and multi-criteria decision-making) method to solve MCDM problems dealing with HFLTSs and considering the decision maker's psychological behavior. A novel score function to compare HFLTSs more effectively is defined. This function is also used in the proposed TODIM method. Finally, a decision-making problem that concerns the evaluation and ranking of several telecommunications service providers is used to illustrate the validity and applicability of the proposed method.
TOPSIS for Hesitant Fuzzy Linguistic Term Sets We propose a new method to aggregate the opinion of experts or decision makers on different criteria, regarding a set of alternatives, where the opinion of the experts is represented by hesitant fuzzy linguistic term sets. An illustrative example is provided to elaborate the proposed method for selection of the best alternative.
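For readers unfamiliar with the underlying procedure, the crisp TOPSIS method that the hesitant fuzzy linguistic variant above extends can be sketched in a few numpy lines: normalize, weight, measure distances to the ideal and anti-ideal solutions, and rank by relative closeness. This is only the classical method; handling hesitant fuzzy linguistic term sets as in the paper requires additional machinery.

```python
import numpy as np

def topsis(scores, weights, benefit):
    """Classical (crisp) TOPSIS ranking sketch.

    scores  : (n_alternatives, n_criteria) decision matrix
    weights : (n_criteria,) criteria weights summing to 1
    benefit : (n_criteria,) booleans, True for benefit criteria, False for cost criteria
    """
    norm = scores / np.linalg.norm(scores, axis=0)     # vector-normalize each criterion
    weighted = norm * weights
    ideal = np.where(benefit, weighted.max(axis=0), weighted.min(axis=0))
    anti_ideal = np.where(benefit, weighted.min(axis=0), weighted.max(axis=0))
    d_plus = np.linalg.norm(weighted - ideal, axis=1)
    d_minus = np.linalg.norm(weighted - anti_ideal, axis=1)
    closeness = d_minus / (d_plus + d_minus)
    return np.argsort(closeness)[::-1]                 # indices of alternatives, best first
```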
Multi-criteria decision-making based on hesitant fuzzy linguistic term sets: An outranking approach Hesitant fuzzy linguistic term sets (HFLTSs) are introduced to express the hesitance existing in linguistic evaluation as clearly as possible. However, most existing methods using HFLTSs simply rely on the labels or intervals of linguistic terms, which may lead to information distortion and/or loss. To avoid this problem, linguistic scale functions are employed in this paper to conduct the transformation between qualitative information and quantitative data. Moreover, the directional Hausdorff distance, which uses HFLTSs, is also proposed and the dominance relations are subsequently defined using this distance. An outranking approach, similar to the ELECTRE method, is constructed for ranking alternatives in multi-criteria decision-making (MCDM) problems, and the approach is demonstrated using a numerical example related to supply chain management. Because of the inherent features of the directional Hausdorff distance and the defined dominance relations, this approach can effectively and efficiently overcome the hidden drawbacks that may hamper the use of HFLTSs. Finally, the accuracy and effectiveness of the proposed approach is further tested through sensitivity and comparative analyses.
A Definition of a Nonprobabilistic Entropy in the Setting of Fuzzy Sets Theory
A probabilistic definition of a nonconvex fuzzy cardinality The existing methods to assess the cardinality of a fuzzy set with finite support are intended to preserve the properties of classical cardinality. In particular, the main objective of researchers in this area has been to ensure the convexity of fuzzy cardinalities, in order to preserve some properties based on the addition of cardinalities, such as the additivity property. We have found that in order to solve many real-world problems, such as the induction of fuzzy rules in Data Mining, convex cardinalities are not always appropriate. In this paper, we propose a possibilistic and a probabilistic cardinality of a fuzzy set with finite support. These cardinalities are not convex in general, but they are well suited to solving such problems and, contrary to the prevailing opinion, they are found to be more intuitive for humans. Their suitability relies mainly on the fact that they assume dependency among objects with respect to the property "to be in a fuzzy set". The cardinality measures are generalized to relative ones among pairs of fuzzy sets. We also introduce a definition of the entropy of a fuzzy set by using one of our probabilistic measures. Finally, a fuzzy ranking of the cardinality of fuzzy sets is proposed, and a definition of graded equipotency is introduced.
Evaluating the informative quality of documents in SGML format from judgements by means of fuzzy linguistic techniques based on computing with words Recommender systems evaluate and filter the great amount of information available on the Web to assist people in their search processes. A fuzzy evaluation method of Standard Generalized Markup Language documents based on computing with words is presented. Given a document type definition (DTD), we consider that its elements are not equally informative. This is indicated in the DTD by assigning linguistic importance attributes to the more meaningful elements of the chosen DTD. Then, the evaluation method generates linguistic recommendations from linguistic evaluation judgements provided by different recommenders on meaningful elements of the DTD. To do so, the evaluation method uses two quantifier-guided linguistic aggregation operators, the linguistic weighted averaging operator and the linguistic ordered weighted averaging operator, which allow us to obtain recommendations taking into account the fuzzy majority of the recommenders' judgements. Using fuzzy linguistic modeling, the user-system interaction is facilitated and the assistance provided by the system is improved. The method can be easily extended on the Web to evaluate HyperText Markup Language and eXtensible Markup Language documents.
Distributed sampling of signals linked by sparse filtering: theory and applications We study the distributed sampling and centralized reconstruction of two correlated signals, modeled as the input and output of an unknown sparse filtering operation. This is akin to a Slepian-Wolf setup, but in the sampling rather than the lossless compression case. Two different scenarios are considered: In the case of universal reconstruction, we look for a sensing and recovery mechanism that works for all possible signals, whereas in what we call almost sure reconstruction, we allow to have a small set (with measure zero) of unrecoverable signals. We derive achievability bounds on the number of samples needed for both scenarios. Our results show that, only in the almost sure setup can we effectively exploit the signal correlations to achieve effective gains in sampling efficiency. In addition to the above theoretical analysis, we propose an efficient and robust distributed sampling and reconstruction algorithm based on annihilating filters. We evaluate the performance of our method in one synthetic scenario, and two practical applications, including the distributed audio sampling in binaural hearing aids and the efficient estimation of room impulse responses. The numerical results confirm the effectiveness and robustness of the proposed algorithm in both synthetic and practical setups.
Intelligent control of a stepping motor drive using a hybrid neuro-fuzzy ANFIS approach Stepping motors are widely used in robotics and in the numerical control of machine tools where they have to perform high-precision positioning operations. However, the variations of the mechanical configuration of the drive, which are common to these two applications, can lead to a loss of synchronism for high stepping rates. Moreover, the classical open-loop speed control is weak and a closed-loop control becomes necessary. In this paper, fuzzy logic is applied to control the speed of a stepping motor drive with feedback. A neuro-fuzzy hybrid approach is used to design the fuzzy rule base of the intelligent system for control. In particular, we used the adaptive neuro-fuzzy inference system (ANFIS) methodology to build a Sugeno fuzzy model for controlling the stepping motor drive. An advanced test bed is used in order to evaluate the tracking properties and the robustness capacities of the fuzzy logic controller.
Numerical and symbolic approaches to uncertainty management in AI Dealing with uncertainty is part of most intelligent behaviour and therefore techniques for managing uncertainty are a critical step in producing intelligent behaviour in machines. This paper discusses the concept of uncertainty and approaches that have been devised for its management in AI and expert systems. These are classified as quantitative (numeric) (Bayesian methods, Mycin's Certainty Factor model, the Dempster-Shafer theory of evidence and Fuzzy Set theory) or symbolic techniques (Nonmonotonic/Default Logics, Cohen's theory of Endorsements, and Fox's semantic approach). Each is discussed, illustrated, and assessed in relation to various criteria which illustrate the relative advantages and disadvantages of each technique. The discussion summarizes some of the criteria relevant to selecting the most appropriate uncertainty management technique for a particular application, emphasizes the differing functionality of the approaches, and outlines directions for future research. This includes combining qualitative and quantitative representations of information within the same application to facilitate different kinds of uncertainty management and functionality.
Bacterial Community Reconstruction Using A Single Sequencing Reaction Bacteria are the unseen majority on our planet, with millions of species and comprising most of the living protoplasm. While current methods enable in-depth study of a small number of communities, a simple tool for breadth studies of bacterial population composition in a large number of samples is lacking. We propose a novel approach for reconstruction of the composition of an unknown mixture of bacteria using a single Sanger-sequencing reaction of the mixture. This method is based on compressive sensing theory, which deals with reconstruction of a sparse signal using a small number of measurements. Utilizing the fact that in many cases each bacterial community is comprised of a small subset of the known bacterial species, we show the feasibility of this approach for determining the composition of a bacterial mixture. Using simulations, we show that sequencing a few hundred base-pairs of the 16S rRNA gene sequence may provide enough information for reconstruction of mixtures containing tens of species, out of tens of thousands, even in the presence of realistic measurement noise. Finally, we show initial promising results when applying our method for the reconstruction of a toy experimental mixture with five species. Our approach may have a potential for a practical and efficient way for identifying bacterial species compositions in biological samples.
1.242667
0.121333
0.121333
0.060667
0.037714
0.015758
0.003048
0.00068
0.000048
0.000001
0
0
0
0
Approximate Sparsity Pattern Recovery: Information-Theoretic Lower Bounds Recovery of the sparsity pattern (or support) of an unknown sparse vector from a small number of noisy linear measurements is an important problem in compressed sensing. In this paper, the high-dimensional setting is considered. It is shown that if the measurement rate and per-sample signal-to-noise ratio (SNR) are finite constants independent of the length of the vector, then the optimal sparsity pattern estimate will have a constant fraction of errors. Lower bounds on the measurement rate needed to attain a desired fraction of errors are given in terms of the SNR and various key parameters of the unknown vector. The tightness of the bounds in a scaling sense, as a function of the SNR and the fraction of errors, is established by comparison with existing achievable bounds. Near optimality is shown for a wide variety of practically motivated signal models.
Thresholded Basis Pursuit: LP Algorithm for Order-Wise Optimal Support Recovery for Sparse and Approximately Sparse Signals From Noisy Random Measurements In this paper we present a linear programming solution for sign pattern recovery of a sparse signal from noisy random projections of the signal. We consider two types of noise models, input noise, where noise enters before the random projection; and output noise, where noise enters after the random projection. Sign pattern recovery involves the estimation of sign pattern of a sparse signal. Our idea is to pretend that no noise exists and solve the noiseless $\ell_1$ problem, namely, $\min \|\beta\|_1 ~ s.t. ~ y=G \beta$ and quantizing the resulting solution. We show that the quantized solution perfectly reconstructs the sign pattern of a sufficiently sparse signal. Specifically, we show that the sign pattern of an arbitrary k-sparse, n-dimensional signal $x$ can be recovered with $SNR=\Omega(\log n)$ and measurements scaling as $m= \Omega(k \log{n/k})$ for all sparsity levels $k$ satisfying $0< k \leq \alpha n$, where $\alpha$ is a sufficiently small positive constant. Surprisingly, this bound matches the optimal \emph{Max-Likelihood} performance bounds in terms of $SNR$, required number of measurements, and admissible sparsity level in an order-wise sense. In contrast to our results, previous results based on LASSO and Max-Correlation techniques either assume significantly larger $SNR$, sublinear sparsity levels or restrictive assumptions on signal sets. Our proof technique is based on noisy perturbation of the noiseless $\ell_1$ problem, in that, we estimate the maximum admissible noise level before sign pattern recovery fails.
Compressed and Privacy-Sensitive Sparse Regression Recent research has studied the role of sparsity in high-dimensional regression and signal reconstruction, establishing theoretical limits for recovering sparse models. This line of work shows that ℓ1-regularized least squares regression can accurately estimate a sparse linear model from noisy examples in high dimensions. We study a variant of this problem where the original n input variables are compressed by a random linear transformation to m ≪ n examples in p dimensions, and establish conditions under which a sparse linear model can be successfully recovered from the compressed data. A primary motivation for this compression procedure is to anonymize the data and preserve privacy by revealing little information about the original data. We characterize the number of projections that are required for ℓ1-regularized compressed regression to identify the nonzero coefficients in the true model with probability approaching one, a property called "sparsistence." We also show that ℓ1-regularized compressed regression asymptotically predicts as well as an oracle linear model, a property called "persistence." Finally, we characterize the privacy properties of the compression procedure, establishing upper bounds on the mutual information between the compressed and uncompressed data that decay to zero.
Information Theoretic Bounds for Compressed Sensing In this paper, we derive information theoretic performance bounds to sensing and reconstruction of sparse phenomena from noisy projections. We consider two settings: output noise models where the noise enters after the projection and input noise models where the noise enters before the projection. We consider two types of distortion for reconstruction: support errors and mean-squared errors. Our goal is to relate the number of measurements, m, and SNR, to signal sparsity, k, distortion level, d, and signal dimension, n. We consider support errors in a worst-case setting. We employ different variations of Fano's inequality to derive necessary conditions on the number of measurements and SNR required for exact reconstruction. To derive sufficient conditions, we develop new insights on max-likelihood analysis based on a novel superposition property. In particular, this property implies that small support errors are the dominant error events. Consequently, our ML analysis does not suffer the conservatism of the union bound and leads to a tighter analysis of max-likelihood. These results provide order-wise tight bounds. For output noise models, we show that asymptotically an SNR of Θ(log(n)) together with Θ( k log(n/k)) measurements is necessary and sufficient for exact support recovery. Furthermore, if a small fraction of support errors can be tolerated, a constant SNR turns out to be sufficient in the linear sparsity regime. In contrast for input noise models, we show that support recovery fails if the number of measurements scales as o(n log(n)/SNR), implying poor compression performance for such cases. Motivated by the fact that the worst-case setup requires significantly high SNR and substantial number of measurements for input and output noise models, we consider a Bayesian setup. To derive necessary conditions, we develop novel extensions to Fano's inequality to handle continuous domains and arbitrary distortions. We then develop a new max-likelihood analysis over the set of rate distortion quantization points to characterize tradeoffs between mean-squared distortion and the number of measurements using rate-distortion theory. We show that with constant SNR the number of measurements scales linearly with the rate-distortion function of the sparse phenomena.
Information-Theoretic Limits on Sparse Signal Recovery: Dense versus Sparse Measurement Matrices We study the information-theoretic limits of exactly recovering the support set of a sparse signal, using noisy projections defined by various classes of measurement matrices. Our analysis is high-dimensional in nature, in which the number of observations n, the ambient signal dimension p, and the signal sparsity k are all allowed to tend to infinity in a general manner. This paper makes two novel contributions. First, we provide sharper necessary conditions for exact support recovery using general (including non-Gaussian) dense measurement matrices. Combined with previously known sufficient conditions, this result yields sharp characterizations of when the optimal decoder can recover a signal for various scalings of the signal sparsity k and sample size n, including the important special case of linear sparsity (k = Θ(p)) using a linear scaling of observations (n = Θ(p)). Our second contribution is to prove necessary conditions on the number of observations n required for asymptotically reliable recovery using a class of γ-sparsified measurement matrices, where the measurement sparsity parameter γ(n, p, k) ∈ (0,1] corresponds to the fraction of nonzero entries per row. Our analysis allows general scaling of the quadruplet (n, p, k, γ), and reveals three different regimes, corresponding to whether measurement sparsity has no asymptotic effect, a minor effect, or a dramatic effect on the information-theoretic limits of the subset recovery problem.
Gradient Projection for Sparse Reconstruction: Application to Compressed Sensing and Other Inverse Problems Many problems in signal processing and statistical inference involve finding sparse solutions to under-determined, or ill-conditioned, linear systems of equations. A standard approach consists in minimizing an objective function which includes a quadratic (squared ℓ2) error term combined with a sparseness-inducing regularization term. Basis pursuit, the least absolute shrinkage and selection operator...
Statistical static timing analysis: how simple can we get? With an increasing trend in the variation of the primary parameters affecting circuit performance, the need for statistical static timing analysis (SSTA) has been firmly established in the last few years. While it is generally accepted that a timing analysis tool should handle parameter variations, the benefits of advanced SSTA algorithms are still questioned by the designer community because of their significant impact on the complexity of STA flows. In this paper, we present convincing evidence that a path-based SSTA approach implemented as a post-processing step captures the effect of parameter variations on circuit performance fairly accurately. On a microprocessor block implemented in 90nm technology, the error in estimating the standard deviation of the timing margin at the inputs of sequential elements is at most 0.066 FO4 delays, which translates into only 0.31% of the worst-case path delay.
Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography A new paradigm, Random Sample Consensus (RANSAC), for fitting a model to experimental data is introduced. RANSAC is capable of interpreting/smoothing data containing a significant percentage of gross errors, and is thus ideally suited for applications in automated image analysis where interpretation is based on the data provided by error-prone feature detectors. A major portion of this paper describes the application of RANSAC to the Location Determination Problem (LDP): Given an image depicting a set of landmarks with known locations, determine that point in space from which the image was obtained. In response to a RANSAC requirement, new results are derived on the minimum number of landmarks needed to obtain a solution, and algorithms are presented for computing these minimum-landmark solutions in closed form. These results provide the basis for an automatic system that can solve the LDP under difficult viewing conditions.
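A compact illustration of the RANSAC paradigm described above, applied to 2-D line fitting rather than the Location Determination Problem treated in the paper; the iteration count and inlier tolerance are arbitrary assumptions chosen for the example.

```python
import numpy as np

def ransac_line(points, n_iter=200, tol=0.05, rng=None):
    """Fit y = a*x + b to data containing gross outliers (illustrative RANSAC sketch).

    points : (N, 2) array of (x, y) observations
    Returns the best (a, b) model found and its inlier count.
    """
    rng = np.random.default_rng() if rng is None else rng
    best_model, best_inliers = None, 0
    for _ in range(n_iter):
        i, j = rng.choice(len(points), size=2, replace=False)
        (x1, y1), (x2, y2) = points[i], points[j]
        if np.isclose(x1, x2):                         # skip degenerate (vertical) samples
            continue
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        residuals = np.abs(points[:, 1] - (a * points[:, 0] + b))
        inliers = int(np.count_nonzero(residuals < tol))
        if inliers > best_inliers:
            best_model, best_inliers = (a, b), inliers
    return best_model, best_inliers
```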
Multi-level Monte Carlo Finite Element method for elliptic PDEs with stochastic coefficients In Monte Carlo methods quadrupling the sample size halves the error. In simulations of stochastic partial differential equations (SPDEs), the total work is the sample size times the solution cost of an instance of the partial differential equation. A Multi-level Monte Carlo method is introduced which allows, in certain cases, to reduce the overall work to that of the discretization of one instance of the deterministic PDE. The model problem is an elliptic equation with stochastic coefficients. Multi-level Monte Carlo errors and work estimates are given both for the mean of the solutions and for higher moments. The overall complexity of computing mean fields as well as k-point correlations of the random solution is proved to be of log-linear complexity in the number of unknowns of a single Multi-level solve of the deterministic elliptic problem. Numerical examples complete the theoretical analysis.
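The Multi-level Monte Carlo method above rests on the telescoping identity E[P_L] = Σ_l E[P_l − P_{l−1}] (with P_{−1} = 0), spending most samples on cheap coarse levels. A schematic numpy sketch follows, assuming the caller supplies a coupled level sampler and a per-level sample budget (both hypothetical names):

```python
import numpy as np

def mlmc_estimate(sample_level, n_samples, seed=0):
    """Multi-level Monte Carlo estimator sketch.

    sample_level(level, rng) must return a coupled pair (P_level, P_level_minus_1)
    computed from the SAME random input, with P_{-1} taken as 0 on level 0.
    n_samples[level] is the number of samples drawn on that level.
    """
    rng = np.random.default_rng(seed)
    estimate = 0.0
    for level, n in enumerate(n_samples):
        corrections = np.array([np.subtract(*sample_level(level, rng)) for _ in range(n)])
        estimate += corrections.mean()                 # Monte Carlo estimate of E[P_l - P_{l-1}]
    return estimate
```

The gain comes from choosing n_samples so that most of the budget is spent on coarse, inexpensive levels while only a few samples are needed on the finest one.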
Stereo image quality: effects of mixed spatio-temporal resolution We explored the response of the human visual system to mixed-resolution stereo video-sequences, in which one eye view was spatially or temporally low-pass filtered. It was expected that the perceived quality, depth, and sharpness would be relatively unaffected by low-pass filtering, compared to the case where both eyes viewed a filtered image. Subjects viewed two 10-second stereo video-sequences, in which the right-eye frames were filtered vertically (V) and horizontally (H) at 1/2 H, 1/2 V, 1/4 H, 1/4 V, 1/2 H 1/2 V, 1/2 H 1/4 V, 1/4 H 1/2 V, and 1/4 H 1/4 V resolution. Temporal filtering was implemented for a subset of these conditions at 1/2 temporal resolution, or with drop-and-repeat frames. Subjects rated the overall quality, sharpness, and overall sensation of depth. It was found that spatial filtering produced acceptable results: the overall sensation of depth was unaffected by low-pass filtering, while ratings of quality and of sharpness were strongly weighted towards the eye with the greater spatial resolution. By comparison, temporal filtering produced unacceptable results: field averaging and drop-and-repeat frame conditions yielded images with poor quality and sharpness, even though perceived depth was relatively unaffected. We conclude that spatial filtering of one channel of a stereo video-sequence may be an effective means of reducing the transmission bandwidth
Compressive sampling for streaming signals with sparse frequency content Compressive sampling (CS) has emerged as a significant signal processing framework to acquire and reconstruct sparse signals at rates significantly below the Nyquist rate. However, most of the CS development to-date has focused on finite-length signals and representations. In this paper we discuss a streaming CS framework and greedy reconstruction algorithm, the Streaming Greedy Pursuit (SGP), to reconstruct signals with sparse frequency content. Our proposed sampling framework and the SGP are explicitly intended for streaming applications and signals of unknown length. The measurement framework we propose is designed to be causal and implementable using existing hardware architectures. Furthermore, our reconstruction algorithm provides specific computational guarantees, which makes it appropriate for real-time system implementations. Our experimental results on very long signals demonstrate the good performance of the SGP and validate our approach.
QoE Aware Service Delivery in Distributed Environment Service delivery and customer satisfaction are strongly related items for a correct commercial management platform. Technical aspects targeting this issue relate to QoS parameters that can be handled by the platform, at least partially. Subjective psychological issues and human cognitive aspects are typically not considered, yet they directly determine the Quality of Experience (QoE). These factors finally have to be considered as key input for a successful business operation between a customer and a company. In our work, a multi-disciplinary approach is taken to propose a QoE interaction model based on the theoretical results from various fields including psychology, cognitive sciences, sociology, service ecosystems and information technology. In this paper a QoE evaluator is described for assessing the service delivery in a distributed and integrated environment on a per-user and per-service basis.
New Type-2 Rule Ranking Indices for Designing Parsimonious Interval Type-2 Fuzzy Logic Systems In this paper, we propose two novel indices for type-2 fuzzy rule ranking to identify the most influential fuzzy rules in designing type-2 fuzzy logic systems, and name them as R-values and c-values of fuzzy rules separately. The R-values of type-2 fuzzy rules are obtained by applying QR decomposition in which there is no need to estimate a rank as required in the SVD-QR with column pivoting algorithm. The c-values of type-2 fuzzy rules are suggested to rank rules based on the effects of rule consequents. Experimental results on a signal recovery problem have shown that by using the proposed indices the most influential type-2 fuzzy rules can be effectively selected to construct parsimonious type-2 fuzzy models while the system performances are kept at a satisfied level.
Designing type-2 fuzzy logic system controllers via fuzzy Lyapunov synthesis for the output regulator of a servomechanism with nonlinear backlash Fuzzy Lyapunov Synthesis is extended to the design of Type-2 Fuzzy Logic System Controllers for the output regulation problem for a servomechanism with nonlinear backlash. The problem in question is to design a feedback controller so as to obtain the closed-loop system in which all trajectories are bounded and the load of the driver is regulated to a desired position while also attenuating the influence of external disturbances. The servomotor position is the only measurement available for feedback; the proposed extension is far from trivial because of nonminimum phase properties of the system. Performance issues of the Type-2 Fuzzy Logic Regulator constructed are illustrated in a simulation study.
1.1
0.05
0.033333
0.014286
0.007692
0.000667
0
0
0
0
0
0
0
0
Video quality estimator for wireless mesh networks As Wireless Mesh Networks (WMNs) have been increasingly deployed, where users can share, create and access videos with different characteristics, the need for new quality estimator mechanisms has become important because operators want to control the quality of video delivery and optimize their network resources, while increasing the user satisfaction. However, the development of in-service Quality of Experience (QoE) estimation schemes for Internet videos (e.g., real-time streaming and gaming) with different complexities, motions, Group of Picture (GoP) sizes and contents remains a significant challenge and is crucial for the success of wireless multimedia systems. To address this challenge, we propose a real-time quality estimator approach, HyQoE, for real-time multimedia applications. The performance evaluation in a WMN scenario demonstrates the high accuracy of HyQoE in estimating the Mean Opinion Score (MOS). Moreover, the results highlight the lack of performance of the well-known objective methods and the Pseudo-Subjective Quality Assessment (PSQA) approach.
A Large-scale Study of Wikipedia Users' Quality of Experience The Web is one of the most successful Internet applications. Yet, the quality of Web users' experience is still largely impenetrable. Whereas Web performance is typically measured with controlled experiments, in this work we perform a large-scale study of one of the most popular websites, namely Wikipedia, explicitly asking (a small fraction of) its users for feedback on the browsing experience. We leverage user survey responses to build a data-driven model of user satisfaction which, despite including state-of-the-art quality of experience metrics, is still far from achieving accurate results, and discuss directions to move forward. Finally, we aim at making our dataset publicly available, which hopefully contributes to enriching and refining the scientific community's knowledge on Web users' quality of experience (QoE).
QoE-based packet dropper controllers for multimedia streaming in WiMAX networks. The proliferation of broadband wireless facilities, together with the demand for multimedia applications, is creating a wireless multimedia era. In this scenario, the key requirement is the delivery of multimedia content with Quality of Service (QoS) and Quality of Experience (QoE) support for thousands of users (and access networks) in the broadband wireless systems of the next generation. This paper sets out new QoE-aware packet controller mechanisms to keep video streaming applications at an acceptable level of quality in Worldwide Interoperability for Microwave Access (WiMAX) networks. In periods of congestion, intelligent packet dropper mechanisms for IEEE 802.16 systems are triggered to drop packets in accordance with their impact on user perception, intra-frame dependence, Group of Pictures (GoP) and available wireless resources in service classes. The simulation results show that the proposed solutions reduce the impact of multimedia flows on the user's experience and optimize wireless network resources in periods of congestion. The benefits of the proposed schemes were evaluated in a simulated WiMAX QoS/QoE environment, by using the following well-known QoE metrics: Peak Signal-to-Noise Ratio (PSNR), Video Quality Metric (VQM), Structural Similarity Index (SSIM) and Mean Opinion Score (MOS).
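Of the objective metrics listed at the end of the abstract, PSNR is the simplest to reproduce. A minimal sketch for 8-bit frames follows; the peak value of 255 is an assumption tied to that bit depth.

```python
import numpy as np

def psnr(reference, degraded, peak=255.0):
    """Peak Signal-to-Noise Ratio (in dB) between a reference and a degraded frame."""
    mse = np.mean((reference.astype(np.float64) - degraded.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak**2 / mse)
```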
QoE-based packet drop control for 3D-video streaming over wireless networks. Currently, the Internet is experiencing a revolution in the provisioning and demand of multimedia services. In this context, 3D-video is envisioned to attract more and more of the multimedia market, with the prospect of enhanced applications (video surveillance, mission critical control, entertainment, etc.). However, 3D-video content delivery places increased bandwidth demands and has rigorous Quality of Service (QoS) and Quality of Experience (QoE) requirements for efficient support. This paper proposes a novel QoE-aware packet controller mechanism, which aims at delivering real-time 3D-video streaming applications over congested wireless networks with acceptable levels of QoS and QoE over time. Simulation experiments were carried out to show the impact and benefit of our proposal. Subjective and objective QoE metrics were used for benchmarking, showing that in congestion periods our solution improves the quality level of real-time 3D-video sequences from the user point-of-view, as well as optimizes the usage of wireless network resources.
Frame concealment algorithm for stereoscopic video using motion vector sharing Stereoscopic video is one of the simplest forms of multi view video, which can be easily adapted for communication applications. Much current research is based on colour and depth map stereoscopic video, due to its reduced bandwidth requirements and backward compatibility. Existing immersive media research is more focused on application processing than aspects related to transfer of immersive content over communication channels. As video over packet networks is affected by missing frames, caused by packet loss, this paper proposes a frame concealment method for colour and depth map based stereoscopic video. The proposed method exploits the motion correlation of colour and depth map image sequences. The colour motion information is reused for prediction during depth map coding. The redundant motion information is then used to conceal transmission errors at the decoder. The experimental results show that the proposed frame concealment scheme performs better than applying error concealment for colour and depth map video separately in a range of packet error conditions.
Video Transport Evaluation With H.264 Video Traces. The performance evaluation of video transport mechanisms becomes increasingly important as encoded video accounts for growing portions of the network traffic. Compared to the widely studied MPEG-4 encoded video, the recently adopted H.264 video coding standards include novel mechanisms, such as hierarchical B frame prediction structures and highly efficient quality scalable coding, that have impor...
Traffic Monitoring and Analysis, Second International Workshop, TMA 2010, Zurich, Switzerland, April 7, 2010, Proceedings
A near optimal QoE-driven power allocation scheme for SVC-based video transmissions over MIMO systems In this paper, we propose a near optimal power allocation scheme, which maximizes the quality of experience (QoE), for scalable video coding (SVC) based video transmissions over multi-input multi-output (MIMO) systems. This scheme tries to optimize the received video quality according to video frame-error-rate (FER), which may be caused by either transmission errors in physical (PHY) layer or video coding structures in application (APP) layer. Due to the complexity of the original optimization problem, we decompose it into several sub-problems, which can then be solved by classic convex optimization methods. Detailed algorithms with corresponding theoretical derivations are provided. Simulations with real video traces demonstrate the effectiveness of our proposed scheme.
Quantification of YouTube QoE via Crowdsourcing This paper addresses the challenge of assessing and modeling Quality of Experience (QoE) for online video services that are based on TCP-streaming. We present a dedicated QoE model for YouTube that takes into account the key influence factors (such as stalling events caused by network bottlenecks) that shape quality perception of this service. As a second contribution, we propose a generic subjective QoE assessment methodology for multimedia applications (like online video) that is based on crowdsourcing - a highly cost-efficient, fast and flexible way of conducting user experiments. We demonstrate how our approach successfully leverages the inherent strengths of crowdsourcing while addressing critical aspects such as the reliability of the experimental data obtained. Our results suggest that crowdsourcing is a highly effective QoE assessment method not only for online video, but also for a wide range of other current and future Internet applications.
The effects of multiview depth video compression on multiview rendering This article investigates the interaction between different techniques for depth compression and view synthesis rendering with multiview video plus scene depth data. Two different approaches for depth coding are compared, namely H.264/MVC, using temporal and inter-view reference images for efficient prediction, and the novel platelet-based coding algorithm, characterized by being adapted to the special characteristics of depth-images. Since depth-images are a 2D representation of the 3D scene geometry, depth-image errors lead to geometry distortions. Therefore, the influence of geometry distortions resulting from coding artifacts is evaluated for both coding approaches in two different ways. First, the variation of 3D surface meshes is analyzed using the Hausdorff distance and second, the distortion is evaluated for 2D view synthesis rendering, where color and depth information are used together to render virtual intermediate camera views of the scene. The results show that, although its rate-distortion (R-D) performance is worse, platelet-based depth coding outperforms H.264, due to improved sharp edge preservation. Therefore, depth coding needs to be evaluated with respect to geometry distortions.
Uncertainty measures for interval type-2 fuzzy sets Fuzziness (entropy) is a commonly used measure of uncertainty for type-1 fuzzy sets. For interval type-2 fuzzy sets (IT2 FSs), centroid, cardinality, fuzziness, variance and skewness are all measures of uncertainties. The centroid of an IT2 FS has been defined by Karnik and Mendel. In this paper, the other four concepts are defined. All definitions use a Representation Theorem for IT2 FSs. Formulas for computing the cardinality, fuzziness, variance and skewness of an IT2 FS are derived. These definitions should be useful in IT2 fuzzy logic systems design using the principles of uncertainty, and in measuring the similarity between two IT2 FSs.
Statistical Timing Analysis Considering Spatially and Temporally Correlated Dynamic Power Supply Noise Power supply noise has an increasingly large influence on timing, yet noise-aware timing analysis has not been fully established because of several difficulties, such as the noise's dependence on input vectors and its dynamic behavior. This paper proposes static timing analysis that takes power supply noise into consideration, where the dependence of noise on input vectors and its spatial and temporal correlations are handled statistically. We construct a statistical model of power supply voltage that dynamically varies with spatial and temporal correlation, and represent it as a set of uncorrelated variables. We demonstrate that power-voltage variations are highly correlated and that adopting principal component analysis as an orthogonalization technique can effectively reduce the number of variables. Experiments confirmed the validity of our model and the accuracy of timing analysis. We also discuss the accuracy and CPU time in association with the reduced number of variables.
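A minimal sketch of the orthogonalization step described here: principal component analysis turns spatially and temporally correlated supply-voltage samples into a small set of uncorrelated variables. The synthetic covariance and sample counts below are illustrative assumptions:

import numpy as np

rng = np.random.default_rng(0)
# rows are Monte Carlo draws of supply voltage at several grid points / time steps
samples = rng.multivariate_normal(
    mean=np.full(6, 1.0),
    cov=0.001 * (0.7 * np.ones((6, 6)) + 0.3 * np.eye(6)),  # strongly correlated noise
    size=5000)

centered = samples - samples.mean(axis=0)
cov = np.cov(centered, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)            # eigenvalues in ascending order
order = np.argsort(eigvals)[::-1]
explained = eigvals[order] / eigvals.sum()
print("variance explained by leading components:", np.round(explained[:3], 3))
# keep the few components that explain most of the variance -> uncorrelated variables
scores = centered @ eigvecs[:, order[:2]]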
Virus propagation with randomness. Viruses are organisms that need to infect a host cell in order to reproduce. The new viruses leave the infected cell and look for other susceptible cells to infect. The mathematical models for virus propagation are very similar to population and epidemic models, and involve a relatively large number of parameters. These parameters are very difficult to establish with accuracy, while variability in the cell and virus populations and measurement errors are also to be expected. To deal with this issue, we consider the parameters to be random variables with given distributions. We use a non-intrusive variant of the polynomial chaos method to obtain statistics from the differential equations of two different virus models. The equations to be solved remain the same as in the deterministic case; thus no new computer codes need to be developed. Some examples are presented.
Granular Association Rules for Multiple Taxonomies: A Mass Assignment Approach The use of hierarchical taxonomies to organise information (or sets of objects) is a common approach for the semantic web and elsewhere, and is based on progressively finer granulations of objects. In many cases, seemingly crisp granulation disguises the fact that categories are based on loosely defined concepts that are better modelled by allowing graded membership. A related problem arises when different taxonomies are used, with different structures, as the integration process may also lead to fuzzy categories. Care is needed when information systems use fuzzy sets to model graded membership in categories - the fuzzy sets are not disjunctive possibility distributions, but must be interpreted conjunctively. We clarify this distinction and show how an extended mass assignment framework can be used to extract relations between fuzzy categories. These relations are association rules and are useful when integrating multiple information sources categorised according to different hierarchies. Our association rules do not suffer from problems associated with use of fuzzy cardinalities. Experimental results on discovering association rules in film databases and terrorism incident databases are demonstrated.
1.055775
0.06
0.055424
0.02757
0.018475
0.00302
0.001067
0.000331
0.000115
0.000007
0
0
0
0
Counter braids: a novel counter architecture for per-flow measurement Fine-grained network measurement requires routers and switches to update large arrays of counters at very high link speed (e.g. 40 Gbps). A naive algorithm needs an infeasible amount of SRAM to store both the counters and a flow-to-counter association rule, so that arriving packets can update corresponding counters at link speed. This has made accurate per-flow measurement complex and expensive, and motivated approximate methods that detect and measure only the large flows. This paper revisits the problem of accurate per-flow measurement. We present a counter architecture, called Counter Braids, inspired by sparse random graph codes. In a nutshell, Counter Braids "compresses while counting". It solves the central problems (counter space and flow-to-counter association) of per-flow measurement by "braiding" a hierarchy of counters with random graphs. Braiding results in drastic space reduction by sharing counters among flows; and using random graphs generated on-the-fly with hash functions avoids the storage of flow-to-counter association. The Counter Braids architecture is optimal (albeit with a complex decoder) as it achieves the maximum compression rate asymptotically. For implementation, we present a low-complexity message passing decoding algorithm, which can recover flow sizes with essentially zero error. Evaluation on Internet traces demonstrates that almost all flow sizes are recovered exactly with only a few bits of counter space per flow.
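The actual Counter Braids decoder relies on message passing over a braided counter hierarchy; the sketch below only illustrates the simpler underlying idea of sharing a small counter array among many flows through hashing (a count-min style update and estimate), with the hash count and array width chosen arbitrarily:

import hashlib

class SharedCounters:
    """Toy shared-counter array: each flow updates d hashed counters and the
    estimate is the minimum of those counters (count-min style).
    This is NOT the Counter Braids message-passing decoder."""
    def __init__(self, width=1024, depth=3):
        self.width, self.depth = width, depth
        self.table = [[0] * width for _ in range(depth)]

    def _index(self, flow_id, row):
        h = hashlib.sha256(f"{row}:{flow_id}".encode()).hexdigest()
        return int(h, 16) % self.width

    def update(self, flow_id, n_bytes=1):
        for row in range(self.depth):
            self.table[row][self._index(flow_id, row)] += n_bytes

    def estimate(self, flow_id):
        return min(self.table[row][self._index(flow_id, row)]
                   for row in range(self.depth))

cb = SharedCounters()
for pkt in ["flowA"] * 10 + ["flowB"] * 3:
    cb.update(pkt)
print(cb.estimate("flowA"), cb.estimate("flowB"))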
Fundamental limits of almost lossless analog compression In Shannon theory, lossless source coding deals with the optimal compression of discrete sources. Compressed sensing is a lossless coding strategy for analog sources by means of multiplication by real-valued matrices. In this paper we study almost lossless analog compression for analog memoryless sources in an information-theoretic framework, in which the compressor is not constrained to linear transformations but it satisfies various regularity conditions such as Lipschitz continuity. The fundamental limit is shown to be the information dimension proposed by Renyi in 1959.
Performance bounds on compressed sensing with Poisson noise This paper describes performance bounds for compressed sensing in the presence of Poisson noise when the underlying signal, a vector of Poisson intensities, is sparse or compressible (admits a sparse approximation). The signal-independent and bounded noise models used in the literature to analyze the performance of compressed sensing do not accurately model the effects of Poisson noise. However, Poisson noise is an appropriate noise model for a variety of applications, including low-light imaging, where sensing hardware is large or expensive, and limiting the number of measurements collected is important. In this paper, we describe how a feasible positivity-preserving sensing matrix can be constructed, and then analyze the performance of a compressed sensing reconstruction approach for Poisson data that minimizes an objective function consisting of a negative Poisson log likelihood term and a penalty term which could be used as a measure of signal sparsity.
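A minimal sketch of the reconstruction objective described here, a negative Poisson log-likelihood plus a sparsity penalty, minimized with an off-the-shelf bound-constrained solver; the penalty weight, problem sizes and the uniform nonnegative sensing matrix are illustrative assumptions, not the paper's construction:

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n, m, k = 60, 30, 4
A = rng.uniform(0.0, 1.0, size=(m, n)) / m       # nonnegative (positivity-preserving) sensing matrix
f_true = np.zeros(n)
f_true[rng.choice(n, k, replace=False)] = rng.uniform(50, 100, k)
y = rng.poisson(A @ f_true)                      # Poisson-distributed measurements

lam, eps = 0.5, 1e-9

def objective(f):
    rate = A @ f + eps
    # negative Poisson log-likelihood (up to the constant log(y!)) + l1 penalty (f >= 0)
    return np.sum(rate - y * np.log(rate)) + lam * np.sum(f)

res = minimize(objective, x0=np.ones(n), method="L-BFGS-B", bounds=[(0.0, None)] * n)
print("estimated support:", np.nonzero(res.x > 1.0)[0])
print("true support:     ", np.sort(np.nonzero(f_true)[0]))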
On the Iterative Decoding of High-Rate LDPC Codes With Applications in Compressed Sensing This paper considers the performance of $(j,k)$-regular low-density parity-check (LDPC) codes with message-passing (MP) decoding algorithms in the high-rate regime. In particular, we derive the high-rate scaling law for MP decoding of LDPC codes on the binary erasure channel (BEC) and the $q$-ary symmetric channel ($q$-SC). For the BEC, the density evolution (DE) threshold of iterative decoding scales like $\Theta(k^{-1})$ and the critical stopping ratio scales like $\Theta(k^{-j/(j-2)})$. For the $q$-SC, the DE threshold of verification decoding depends on the details of the decoder and scales like $\Theta(k^{-1})$ for one decoder. Using the fact that coding over large finite alphabets is very similar to coding over the real numbers, the analysis of verification decoding is also extended to the compressed sensing (CS) of strictly-sparse signals. A DE based approach is used to analyze the CS systems with randomized-reconstruction guarantees. This leads to the result that strictly-sparse signals can be reconstructed efficiently with high-probability using a constant oversampling ratio (i.e., when the number of measurements scales linearly with the sparsity of the signal). A stopping-set based approach is also used to get stronger (e.g., uniform-in-probability) reconstruction guarantees.
Combinatorial Sublinear-Time Fourier Algorithms We study the problem of estimating the best k term Fourier representation for a given frequency sparse signal (i.e., vector) A of length N≫k. More explicitly, we investigate how to deterministically identify k of the largest magnitude frequencies of $\hat{\mathbf{A}}$, and estimate their coefficients, in polynomial(k,log N) time. Randomized sublinear-time algorithms which have a small (controllable) probability of failure for each processed signal exist for solving this problem (Gilbert et al. in ACM STOC, pp. 152–161, 2002; Proceedings of SPIE Wavelets XI, 2005). In this paper we develop the first known deterministic sublinear-time sparse Fourier Transform algorithm which is guaranteed to produce accurate results. As an added bonus, a simple relaxation of our deterministic Fourier result leads to a new Monte Carlo Fourier algorithm with similar runtime/sampling bounds to the current best randomized Fourier method (Gilbert et al. in Proceedings of SPIE Wavelets XI, 2005). Finally, the Fourier algorithm we develop here implies a simpler optimized version of the deterministic compressed sensing method previously developed in (Iwen in Proc. of ACM-SIAM Symposium on Discrete Algorithms (SODA’08), 2008).
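For contrast with the sublinear-time algorithm described above, this is the straightforward superlinear baseline it approximates: take the full FFT and keep the k largest-magnitude frequencies; the sizes and the synthetic k-sparse test signal are illustrative:

import numpy as np

N, k = 1024, 5
rng = np.random.default_rng(2)
freqs = rng.choice(N, k, replace=False)
signal = sum(np.exp(2j * np.pi * f * np.arange(N) / N) for f in freqs)

spectrum = np.fft.fft(signal) / N           # O(N log N), not sublinear
top_k = np.argsort(np.abs(spectrum))[-k:]   # indices of the k largest coefficients
print(sorted(top_k), sorted(freqs))         # the recovered frequencies match the planted ones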
Approximate Sparse Recovery: Optimizing Time and Measurements A Euclidean approximate sparse recovery system consists of parameters $k,N$, an $m$-by-$N$ measurement matrix, $\bm{\Phi}$, and a decoding algorithm, $\mathcal{D}$. Given a vector, ${\mathbf x}$, the system approximates ${\mathbf x}$ by $\widehat {\mathbf x}=\mathcal{D}(\bm{\Phi} {\mathbf x})$, which must satisfy $|\widehat {\mathbf x} - {\mathbf x}|_2\le C |{\mathbf x} - {\mathbf x}_k|_2$, where ${\mathbf x}_k$ denotes the optimal $k$-term approximation to ${\mathbf x}$. (The output $\widehat{\mathbf x}$ may have more than $k$ terms.) For each vector ${\mathbf x}$, the system must succeed with probability at least 3/4. Among the goals in designing such systems are minimizing the number $m$ of measurements and the runtime of the decoding algorithm, $\mathcal{D}$. In this paper, we give a system with $m=O(k \log(N/k))$ measurements—matching a lower bound, up to a constant factor—and decoding time $k\log^{O(1)} N$, matching a lower bound up to a polylog$(N)$ factor. We also consider the encode time (i.e., the time to multiply $\bm{\Phi}$ by $x$), the time to update measurements (i.e., the time to multiply $\bm{\Phi}$ by a 1-sparse $x$), and the robustness and stability of the algorithm (resilience to noise before and after the measurements). Our encode and update times are optimal up to $\log(k)$ factors. The columns of $\bm{\Phi}$ have at most $O(\log^2(k)\log(N/k))$ nonzeros, each of which can be found in constant time. Our full result, a fully polynomial randomized approximation scheme, is as follows. If ${\mathbf x}={\mathbf x}_k+\nu_1$, where $\nu_1$ and $\nu_2$ (below) are arbitrary vectors (regarded as noise), then setting $\widehat {\mathbf x} = \mathcal{D}(\Phi {\mathbf x} + \nu_2)$, and for properly normalized $\bm{\Phi}$, we get $\left|{\mathbf x} - \widehat {\mathbf x}\right|_2^2 \le (1+\epsilon)\left|\nu_1\right|_2^2 + \epsilon\left|\nu_2\right|_2^2$ using $O((k/\epsilon)\log(N/k))$ measurements and $(k/\epsilon)\log^{O(1)}(N)$ time for decoding.
Decay Properties of Restricted Isometry Constants Many sparse approximation algorithms accurately recover the sparsest solution to an underdetermined system of equations provided the matrix's restricted isometry constants (RICs) satisfy certain bounds. There are no known large deterministic matrices that satisfy the desired RIC bounds; however, members of many random matrix ensembles typically satisfy RIC bounds. This experience with random matrices has colored the view of the RICs' behavior. By modifying matrices assumed to have bounded RICs, we construct matrices whose RICs behave in a markedly different fashion than the classical random matrices; RICs can satisfy desirable bounds and also take on values in a narrow range.
Compressive-projection principal component analysis. Principal component analysis (PCA) is often central to dimensionality reduction and compression in many applications, yet its data-dependent nature as a transform computed via expensive eigendecomposition often hinders its use in severely resource-constrained settings such as satellite-borne sensors. A process is presented that effectively shifts the computational burden of PCA from the resource-constrained encoder to a presumably more capable base-station decoder. The proposed approach, compressive-projection PCA (CPPCA), is driven by projections at the sensor onto lower-dimensional subspaces chosen at random, while the CPPCA decoder, given only these random projections, recovers not only the coefficients associated with the PCA transform, but also an approximation to the PCA transform basis itself. An analysis is presented that extends existing Rayleigh-Ritz theory to the special case of highly eccentric distributions; this analysis in turn motivates a reconstruction process at the CPPCA decoder that consists of a novel eigenvector reconstruction based on a convex-set optimization driven by Ritz vectors within the projected subspaces. As such, CPPCA constitutes a fundamental departure from traditional PCA in that it permits its excellent dimensionality-reduction and compression performance to be realized in a light-encoder/heavy-decoder system architecture. In experimental results, CPPCA outperforms a multiple-vector variant of compressed sensing for the reconstruction of hyperspectral data.
Recovering Sparse Signals Using Sparse Measurement Matrices in Compressed DNA Microarrays Microarrays (DNA, protein, etc.) are massively parallel affinity-based biosensors capable of detecting and quantifying a large number of different genomic particles simultaneously. Among them, DNA microarrays comprising tens of thousands of probe spots are currently being employed to test a multitude of targets in a single experiment. In conventional microarrays, each spot contains a large number of copies of a single probe designed to capture a single target, and hence collects only a single data point. This is a wasteful use of the sensing resources in comparative DNA microarray experiments, where a test sample is measured relative to a reference sample. Typically, only a fraction of the total number of genes represented by the two samples is differentially expressed, and thus a vast number of probe spots may not provide any useful information. To this end we propose an alternative design, the so-called compressed microarrays, wherein each spot contains copies of several different probes and the total number of spots is potentially much smaller than the number of targets being tested. Fewer spots directly translates to significantly lower costs due to cheaper array manufacturing, simpler image acquisition and processing, and smaller amount of genomic material needed for experiments. To recover signals from compressed microarray measurements, we leverage ideas from compressive sampling. For sparse measurement matrices, we propose an algorithm that has significantly lower computational complexity than the widely-used linear-programming-based methods, and can also recover signals with less sparsity.
Generalized Krylov recycling methods for solution of multiple related linear equation systems in electromagnetic analysis In this paper we propose methods for fast iterative solution of multiple related linear systems of equations. Such systems arise, for example, in building pattern libraries for interconnect parasitic extraction, parasitic extraction under process variation, and parameterized interconnect characterization. Our techniques are based on a generalized form of "recycled" Krylov subspace methods that use sharing of information between related systems of equations to accelerate the iterative solution. Experimental results on electromagnetics problems demonstrate that the proposed method can achieve a speed-up of 5X~30X compared to direct GMRES applied sequentially to the individual systems. These methods are generic, fully treat nonlinear perturbations without approximation, and can be applied in a wide variety of application domains outside electromagnetics.
Statistical analysis of subthreshold leakage current for VLSI circuits We develop a method to estimate the variation of leakage current due to both intra-die and inter-die gate length process variability. We derive an analytical expression to estimate the probability density function (PDF) of the leakage current for stacked devices found in CMOS gates. These distributions of individual gate leakage currents are then combined to obtain the mean and variance of the leakage current for an entire circuit. We also present an approach to account for both the inter- and intra-die gate length variations to ensure that the circuit leakage PDF correctly models both types of variation. The proposed methods were implemented and tested on a number of benchmark circuits. Comparison to Monte Carlo simulation validates the accuracy of the proposed method and demonstrates the efficiency of the proposed analysis method. Comparison with traditional deterministic leakage current analysis demonstrates the need for statistical methods for leakage current analysis.
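A minimal numerical illustration of the kind of comparison reported here: full-chip leakage is a sum of per-gate lognormal leakage currents, and its analytically propagated mean and variance can be checked against Monte Carlo. The gate count and lognormal parameters are illustrative assumptions, and the per-gate currents are treated as independent for simplicity (the paper additionally handles inter-die and intra-die correlation):

import numpy as np

rng = np.random.default_rng(3)
n_gates = 1000
mu, sigma = np.log(1e-9), 0.4            # per-gate leakage ~ lognormal(mu, sigma), in amperes

# analytical moments of one lognormal gate, then of the (independent) sum
gate_mean = np.exp(mu + sigma**2 / 2)
gate_var = (np.exp(sigma**2) - 1) * np.exp(2 * mu + sigma**2)
print("analytical mean/std:", n_gates * gate_mean, np.sqrt(n_gates * gate_var))

# Monte Carlo check
total = rng.lognormal(mu, sigma, size=(4000, n_gates)).sum(axis=1)
print("Monte Carlo mean/std:", total.mean(), total.std())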
Topological approaches to covering rough sets Rough sets, a tool for data mining, deal with the vagueness and granularity in information systems. This paper studies covering-based rough sets from the topological view. We explore the topological properties of this type of rough sets, study the interdependency between the lower and the upper approximation operations, and establish the conditions under which two coverings generate the same lower approximation operation and the same upper approximation operation. Lastly, axiomatic systems for the lower approximation operation and the upper approximation operation are constructed.
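A small sketch of one common pair of covering-based approximation operators: the lower approximation as the union of covering blocks contained in the target set, and the upper approximation as the union of blocks that meet it. This particular pair is an illustrative choice, since the literature studies several variants:

def lower_approx(cover, X):
    """Union of covering blocks fully contained in X."""
    return set().union(*(K for K in cover if K <= X))

def upper_approx(cover, X):
    """Union of covering blocks that intersect X."""
    return set().union(*(K for K in cover if K & X))

cover = [frozenset({1, 2}), frozenset({2, 3}), frozenset({3, 4, 5})]
X = {1, 2, 3}
print(lower_approx(cover, X))   # {1, 2, 3}
print(upper_approx(cover, X))   # {1, 2, 3, 4, 5}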
Tree-Structured Compressive Sensing with Variational Bayesian Analysis In compressive sensing (CS) the known structure in the transform coefficients may be leveraged to improve reconstruction accuracy. We here develop a hierarchical statistical model applicable to both wavelet and JPEG-based DCT bases, in which the tree structure in the sparseness pattern is exploited explicitly. The analysis is performed efficiently via variational Bayesian (VB) analysis, and compar...
An Interval-Valued Intuitionistic Fuzzy Rough Set Model Given the widespread interest in rough sets as applied to various tasks of data analysis, it is not surprising that we have witnessed a wave of further generalizations and algorithmic enhancements of this original concept. This paper proposes an interval-valued intuitionistic fuzzy rough model by means of integrating the classical Pawlak rough set theory with the interval-valued intuitionistic fuzzy set theory. Firstly, some concepts and properties of interval-valued intuitionistic fuzzy sets and interval-valued intuitionistic fuzzy relations are introduced. Secondly, a pair of lower and upper interval-valued intuitionistic fuzzy rough approximation operators induced from an interval-valued intuitionistic fuzzy relation is defined, and some properties of the approximation operators are investigated in detail. Furthermore, by introducing cut sets of interval-valued intuitionistic fuzzy sets, classical representations of interval-valued intuitionistic fuzzy rough approximation operators are presented. Finally, the connections between special interval-valued intuitionistic fuzzy relations and interval-valued intuitionistic fuzzy rough approximation operators are constructed, and the relationships between this model and other rough set models are also examined.
1.044153
0.030249
0.024132
0.016663
0.006672
0.002859
0.000414
0.000101
0.000028
0
0
0
0
0
On the Iterative Decoding of High-Rate LDPC Codes With Applications in Compressed Sensing This paper considers the performance of $(j,k)$-regular low-density parity-check (LDPC) codes with message-passing (MP) decoding algorithms in the high-rate regime. In particular, we derive the high-rate scaling law for MP decoding of LDPC codes on the binary erasure channel (BEC) and the $q$-ary symmetric channel ($q$-SC). For the BEC, the density evolution (DE) threshold of iterative decoding scales like $\Theta(k^{-1})$ and the critical stopping ratio scales like $\Theta(k^{-j/(j-2)})$. For the $q$-SC, the DE threshold of verification decoding depends on the details of the decoder and scales like $\Theta(k^{-1})$ for one decoder. Using the fact that coding over large finite alphabets is very similar to coding over the real numbers, the analysis of verification decoding is also extended to the compressed sensing (CS) of strictly-sparse signals. A DE based approach is used to analyze the CS systems with randomized-reconstruction guarantees. This leads to the result that strictly-sparse signals can be reconstructed efficiently with high-probability using a constant oversampling ratio (i.e., when the number of measurements scales linearly with the sparsity of the signal). A stopping-set based approach is also used to get stronger (e.g., uniform-in-probability) reconstruction guarantees.
Fundamental limits of almost lossless analog compression In Shannon theory, lossless source coding deals with the optimal compression of discrete sources. Compressed sensing is a lossless coding strategy for analog sources by means of multiplication by real-valued matrices. In this paper we study almost lossless analog compression for analog memoryless sources in an information-theoretic framework, in which the compressor is not constrained to linear transformations but it satisfies various regularity conditions such as Lipschitz continuity. The fundamental limit is shown to be the information dimension proposed by Renyi in 1959.
Counter braids: a novel counter architecture for per-flow measurement Fine-grained network measurement requires routers and switches to update large arrays of counters at very high link speed (e.g. 40 Gbps). A naive algorithm needs an infeasible amount of SRAM to store both the counters and a flow-to-counter association rule, so that arriving packets can update corresponding counters at link speed. This has made accurate per-flow measurement complex and expensive, and motivated approximate methods that detect and measure only the large flows. This paper revisits the problem of accurate per-flow measurement. We present a counter architecture, called Counter Braids, inspired by sparse random graph codes. In a nutshell, Counter Braids "compresses while counting". It solves the central problems (counter space and flow-to-counter association) of per-flow measurement by "braiding" a hierarchy of counters with random graphs. Braiding results in drastic space reduction by sharing counters among flows; and using random graphs generated on-the-fly with hash functions avoids the storage of flow-to-counter association. The Counter Braids architecture is optimal (albeit with a complex decoder) as it achieves the maximum compression rate asymptotically. For implementation, we present a low-complexity message passing decoding algorithm, which can recover flow sizes with essentially zero error. Evaluation on Internet traces demonstrates that almost all flow sizes are recovered exactly with only a few bits of counter space per flow.
LP decoding meets LP decoding: a connection between channel coding and compressed sensing This is a tale of two linear programming decoders, namely channel coding linear programming decoding (CC-LPD) and compressed sensing linear programming decoding (CS-LPD). So far, they have evolved quite independently. The aim of the present paper is to show that there is a tight connection between, on the one hand, CS-LPD based on a zero-one measurement matrix over the reals and, on the other hand, CC-LPD of the binary linear code that is obtained by viewing this measurement matrix as a binary parity-check matrix. This connection allows one to translate performance guarantees from one setup to the other.
Bayesian compressive sensing via belief propagation Compressive sensing (CS) is an emerging field based on the revelation that a small collection of linear projections of a sparse signal contains enough information for stable, sub-Nyquist signal acquisition. When a statistical characterization of the signal is available, Bayesian inference can complement conventional CS methods based on linear programming or greedy algorithms. We perform asymptotically optimal Bayesian inference using belief propagation (BP) decoding, which represents the CS encoding matrix as a graphical model. Fast computation is obtained by reducing the size of the graphical model with sparse encoding matrices. To decode a length-N signal containing K large coefficients, our CS-BP decoding algorithm uses O(K log(N)) measurements and O(N log^2(N)) computation. Finally, although we focus on a two-state mixture Gaussian model, CS-BP is easily adapted to other signal models.
Construction of a Large Class of Deterministic Sensing Matrices That Satisfy a Statistical Isometry Property Compressed Sensing aims to capture attributes of k-sparse signals using very few measurements. In the standard compressed sensing paradigm, the N × C measurement matrix Φ is required to act as a near isometry on the set of all k-sparse signals (restricted isometry property or RIP). Although it is known that certain probabilistic processes generate N × C matrices that satisfy RIP with high probability, there is no practical algorithm for verifying whether a given sensing matrix Φ has this property, crucial for the feasibility of the standard recovery algorithms. In contrast, this paper provides simple criteria that guarantee that a deterministic sensing matrix satisfying these criteria acts as a near isometry on an overwhelming majority of k-sparse signals; in particular, most such signals have a unique representation in the measurement domain. Probability still plays a critical role, but it enters the signal model rather than the construction of the sensing matrix. An essential element in our construction is that we require the columns of the sensing matrix to form a group under pointwise multiplication. The construction allows recovery methods for which the expected performance is sub-linear in C, and only quadratic in N, as compared to the super-linear complexity in C of the Basis Pursuit or Matching Pursuit algorithms; the focus on expected performance is more typical of mainstream signal processing than the worst case analysis that prevails in standard compressed sensing. Our framework encompasses many families of deterministic sensing matrices, including those formed from discrete chirps, Delsarte-Goethals codes, and extended BCH codes.
Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information This paper considers the model problem of reconstructing an object from incomplete frequency samples. Consider a discrete-time signal $f \in \mathbb{C}^N$ and a randomly chosen set of frequencies $\Omega$. Is it possible to reconstruct $f$ from the partial knowledge of its Fourier coefficients on the set $\Omega$? A typical result of this paper is as follows. Suppose that $f$ is a superposition of $|T|$ spikes $f(t)=\sum_{\tau\in T}f(\tau)\delta(t-\tau)$ obeying $|T|\le C_M\cdot(\log N)^{-1}\cdot|\Omega|$ for some constant $C_M>0$. We do not know the locations of the spikes nor their amplitudes. Then with probability at least $1-O(N^{-M})$, $f$ can be reconstructed exactly as the solution to the $\ell_1$ minimization problem. In short, exact recovery may be obtained by solving a convex optimization problem. We give numerical values for $C_M$ which depend on the desired probability of success. Our result may be interpreted as a novel kind of nonlinear sampling theorem. In effect, it says that any signal made out of $|T|$ spikes may be recovered by convex programming from almost every set of frequencies of size $O(|T|\cdot\log N)$. Moreover, this is nearly optimal in the sense that any method succeeding with probability $1-O(N^{-M})$ would in general require a number of frequency samples at least proportional to $|T|\cdot\log N$. The methodology extends to a variety of other situations and higher dimensions. For example, we show how one can reconstruct a piecewise constant (one- or two-dimensional) object from incomplete frequency samples, provided that the number of jumps (discontinuities) obeys the condition above, by minimizing other convex functionals such as the total variation of $f$.
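The paper's recovery step is an equality-constrained l1 minimization; the sketch below instead solves the closely related unconstrained LASSO form with plain iterative soft-thresholding (ISTA) on a random Gaussian measurement matrix, purely to illustrate that convex l1 recovery finds the planted spikes. The matrix choice, regularization weight and iteration count are illustrative assumptions:

import numpy as np

rng = np.random.default_rng(4)
N, m, k = 200, 60, 5
A = rng.standard_normal((m, N)) / np.sqrt(m)
x_true = np.zeros(N)
x_true[rng.choice(N, k, replace=False)] = 3 * rng.standard_normal(k)
y = A @ x_true

lam = 0.01
step = 1.0 / np.linalg.norm(A, 2) ** 2           # 1 / (largest singular value)^2
x = np.zeros(N)
for _ in range(2000):                            # ISTA: gradient step + soft-thresholding
    g = x - step * A.T @ (A @ x - y)
    x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)

print("recovered support:", np.nonzero(np.abs(x) > 0.1)[0])
print("true support:     ", np.sort(np.nonzero(x_true)[0]))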
Bayesian Compressive Sensing The data of interest are assumed to be represented as N-dimensional real vectors, and these vectors are compressible in some linear basis B, implying that the signal can be reconstructed accurately using only a small number M ≪ N of basis-function coefficients associated with B. Compressive sensing is a framework whereby one does not measure one of the aforementioned N-dimensional signals directly, but rather a set of related measurements, with the new measurements a linear combination of the original underlying N-dimensional signal. The number of required compressive-sensing measurements is typically much smaller than N, offering the potential to simplify the sensing system. Let f denote the unknown underlying N-dimensional signal, and g a vector of compressive-sensing measurements, then one may approximate f accurately by utilizing knowledge of the (under-determined) linear relationship between f and g, in addition to knowledge of the fact that f is compressible in B. In this paper we employ a Bayesian formalism for estimating the underlying signal f based on compressive-sensing measurements g. The proposed framework has the following properties: i) in addition to estimating the underlying signal f, "error bars" are also estimated, these giving a measure of confidence in the inverted signal; ii) using knowledge of the error bars, a principled means is provided for determining when a sufficient number of compressive-sensing measurements have been performed; iii) this setting lends itself naturally to a framework whereby the compressive sensing measurements are optimized adaptively and hence not determined randomly; and iv) the framework accounts for additive noise in the compressive-sensing measurements and provides an estimate of the noise variance. In this paper we present the underlying theory, an associated algorithm, example results, and provide comparisons to other compressive-sensing inversion algorithms in the literature.
Improved Bounds on Restricted Isometry Constants for Gaussian Matrices The restricted isometry constant (RIC) of a matrix $A$ measures how close to an isometry is the action of $A$ on vectors with few nonzero entries, measured in the $\ell^2$ norm. Specifically, the upper and lower RICs of a matrix $A$ of size $n\times N$ are the maximum and the minimum deviation from unity (one) of the largest and smallest, respectively, square of singular values of all ${N\choose k}$ matrices formed by taking $k$ columns from $A$. Calculation of the RIC is intractable for most matrices due to its combinatorial nature; however, many random matrices typically have bounded RIC in some range of problem sizes $(k,n,N)$. We provide the best known bound on the RIC for Gaussian matrices, which is also the smallest known bound on the RIC for any large rectangular matrix. Our results are built on the prior bounds of Blanchard, Cartis, and Tanner [SIAM Rev., to appear], with improvements achieved by grouping submatrices that share a substantial number of columns.
Low Rate Sampling Schemes for Time Delay Estimation Time delay estimation arises in many applications in which a multipath medium has to be identified from pulses transmitted through the channel. Various approaches have been proposed in the literature to identify time delays introduced by multipath environments. However, these methods either operate on the analog received signal, or require high sampling rates in order to achieve reasonable time resolution. In this paper, our goal is to develop a unified approach to time delay estimation from low rate samples of the output of a multipath channel. Our methods result in perfect recovery of the multipath delays from samples of the channel output at the lowest possible rate, even in the presence of overlapping transmitted pulses. This rate depends only on the number of multipath components and the transmission rate, but not on the bandwidth of the probing signal. In addition, our development allows for a variety of different sampling methods. By properly manipulating the low-rate samples, we show that the time delays can be recovered using the well-known ESPRIT algorithm. Combining results from sampling theory with those obtained in the context of direction of arrival estimation methods, we develop necessary and sufficient conditions on the transmitted pulse and the sampling functions in order to ensure perfect recovery of the channel parameters at the minimal possible rate.
Stochastic Collocation Methods on Unstructured Grids in High Dimensions via Interpolation. In this paper we propose a method for conducting stochastic collocation on arbitrary sets of nodes. To accomplish this, we present the framework of least orthogonal interpolation, which allows one to construct interpolation polynomials based on arbitrarily located grids in arbitrary dimensions. These interpolation polynomials are constructed as a subspace of the family of orthogonal polynomials corresponding to the probability distribution function on stochastic space. This feature enables one to conduct stochastic collocation simulations in practical problems where one cannot adopt some popular node selections such as sparse grids or cubature nodes. We present in detail both the mathematical properties of the least orthogonal interpolation and its practical implementation algorithm. Numerical benchmark problems are also presented to demonstrate the efficacy of the method.
A note on compressed sensing and the complexity of matrix multiplication We consider the conjectured O(N^{2+ε}) time complexity of multiplying any two N×N matrices A and B. Our main result is a deterministic Compressed Sensing (CS) algorithm that both rapidly and accurately computes A·B provided that the resulting matrix product is sparse/compressible. As a consequence of our main result we increase the class of matrices A, for any given N×N matrix B, which allows the exact computation of A·B to be carried out using the conjectured O(N^{2+ε}) operations. Additionally, in the process of developing our matrix multiplication procedure, we present a modified version of Indyk's recently proposed extractor-based CS algorithm [P. Indyk, Explicit constructions for compressed sensing of sparse signals, in: SODA, 2008] which is resilient to noise.
Parallel Opportunistic Routing in Wireless Networks We study benefits of opportunistic routing in a large wireless ad hoc network by examining how the power, delay, and total throughput scale as the number of source–destination pairs increases up to the operating maximum. Our opportunistic routing is novel in a sense that it is massively parallel, i.e., it is performed by many nodes simultaneously to maximize the opportunistic gain while controlling the interuser interference. The scaling behavior of conventional multihop transmission that does not employ opportunistic routing is also examined for comparison. Our main results indicate that our opportunistic routing can exhibit a net improvement in overall power–delay tradeoff over the conventional routing by providing up to a logarithmic boost in the scaling law. Such a gain is possible since the receivers can tolerate more interference due to the increased received signal power provided by the multi user diversity gain, which means that having more simultaneous transmissions is possible.
Some general comments on fuzzy sets of type-2 This paper contains some general comments on the algebra of truth values of fuzzy sets of type 2. It details the precise mathematical relationship with the algebras of truth values of ordinary fuzzy sets and of interval-valued fuzzy sets. Subalgebras of the algebra of truth values and t-norms on them are discussed. There is some discussion of finite type-2 fuzzy sets. © 2008 Wiley Periodicals, Inc.
1.033183
0.018929
0.016663
0.013875
0.006428
0.002321
0.000365
0.000067
0.000015
0
0
0
0
0
An Experimental QoE Performance Study for the Efficient Transmission of High Demanding Traffic over an Ad Hoc Network Using BATMAN. Multimedia communications are attracting great attention from the research, industry, and end-user communities. The latter are increasingly claiming higher levels of quality and the possibility of consuming multimedia content from a plethora of devices at their disposal. Clearly, the most appealing gadgets are those that communicate wirelessly to access these services. However, current wireless technologies raise severe concerns about supporting extremely demanding services such as real-time multimedia transmissions. This paper evaluates, from QoE and QoS perspectives, the capability of the ad hoc routing protocol called BATMAN to support Voice over IP and video traffic. To this end, two test benches were proposed, namely a real (emulated) testbed and a simulation framework. Additionally, a series of modifications to both protocols' parameter settings and to the video-stream characteristics was proposed, which contributes to further improving the multimedia quality perceived by the users. The performance of the well-extended protocol OLSR is also evaluated in detail to compare it with BATMAN. From the results, a notably high correlation between real experimentation and computer simulation outcomes was observed. It was also found that, with the proper configuration, BATMAN is able to transmit several QCIF video-streams and VoIP calls with high quality. In addition, BATMAN outperforms OLSR in supporting multimedia traffic in both experimental and simulated environments.
Real-Time QoE Monitoring System for Video Streaming Services with Adaptive Media Playout Quality of Experience (QoE) of video streaming services has been attracting more and more attention recently. Therefore, in this work we designed and implemented a real-time QoE monitoring system for streaming services with Adaptive Media Playout (AMP), which was implemented into the VideoLAN Client (VLC) media player to dynamically adjust the playout rate of videos according to the buffer fullness of the client buffer. The QoE monitoring system reports the QoE of streaming services in real time so that network/content providers can monitor the qualities of their services and resolve troubles immediately whenever their subscribers encounter them. Several experiments including wired and wireless streaming were conducted to show the effectiveness of the implemented AMP and QoE monitoring system. Experimental results demonstrate that AMP significantly improves the QoE of streaming services according to the Mean Opinion Score (MOS) estimated by our developed program. Additionally, some challenging issues in wireless streaming have been easily identified using the developed QoE monitoring system.
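A minimal sketch of the adaptive media playout idea used by the player described here: slow the playout rate when the client buffer runs low and speed it up when the buffer is full. The thresholds and rate bounds below are illustrative assumptions, not the values implemented in the paper's VLC modification:

def playout_rate(buffer_sec, low=2.0, high=8.0, min_rate=0.75, max_rate=1.25):
    """Map buffer fullness (seconds of buffered media) to a playout-rate factor.
    Below `low` seconds play slower; above `high` seconds play faster;
    in between, interpolate linearly around normal speed (1.0)."""
    if buffer_sec <= low:
        return min_rate
    if buffer_sec >= high:
        return max_rate
    frac = (buffer_sec - low) / (high - low)
    return min_rate + frac * (max_rate - min_rate)

for b in (1, 3, 5, 7, 9):
    print(b, "s buffered ->", round(playout_rate(b), 2), "x speed")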
A system testbed for modeling encrypted video-streaming service performance indicators based on TCP/IP metrics For cellular operators, estimating the end-user experience from network measurements is a challenging task. For video-streaming service, several analytical models have been proposed to estimate user opinion from buffering metrics. However, there remains the problem of estimating these buffering metrics from the limited set of measurements available on a per-connection basis for encrypted video services. In this paper, a system testbed is presented for automatically constructing a simple, albeit accurate, Quality-of-Experience (QoE) model for encrypted video-streaming services in a wireless network. The testbed consists of a terminal agent, a network-level emulator, and Probe software, which are used to compare end-user and network-level measurements. For illustration purposes, the testbed is used to derive the formulas to compute video performance metrics from TCP/IP metrics for encrypted YouTube traffic in a Wi-Fi network. The resulting formulas, which would be the core of a video-streaming QoE model, are also applicable to cellular networks, as the test campaign fully covers typical mobile network conditions and the formulas are partly validated in a real LTE network.
QoE model for video delivered over an LTE network using HTTP adaptive streaming The end user quality of experience (QoE) of content delivered over a radio network is mainly influenced by the radio parameters in the radio access network. This paper will present a QoE model for video delivered over a radio network, e.g., Long Term Evolution (LTE), using HTTP (Hypertext Transfer Protocol) adaptive streaming (HAS). The model is based on experiments performed in the context of the Next Generation Mobile Networks (NGMN) project P-SERQU (Project Service Quality Definition and Measurement). In the first phase, a set of representative HAS profiles were selected based on a lab experiment where scenarios with typical radio impairments (fading, signal-to-interference-plus-noise ratio, round trip time and competing traffic) were investigated in a test network. Based on these HAS profiles, video files were prepared by concatenating chunks of the corresponding video quality. In a second phase, these video files were downloaded, viewed, and rated by a large number of volunteers. Based on these user scores a mean opinion score (MOS) was determined for each of the video files, and hence, the HAS profiles. Several QoE models that predict the MOS from the HAS profile have been analyzed. Using the preferred QoE model, a range of MOS values can be obtained for each set of initial radio impairments. It is argued that a QoE model based on the radio parameters is necessarily less accurate than a QoE model based on HAS profiles and an indication is given of how much the performance of the former is less than the latter. © 2014 Alcatel-Lucent.
An algorithm for pronominal anaphora resolution This paper presents an algorithm for identifying the noun phrase antecedents of third person pronouns and lexical anaphors (reflexives and reciprocals). The algorithm applies to the syntactic representations generated by McCord's Slot Grammar parser and relies on salience measures derived from syntactic structure and a simple dynamic model of attentional state. Like the parser, the algorithm is implemented in Prolog. The authors have tested it extensively on computer manual texts and conducted a blind test on manual text containing 360 pronoun occurrences. The algorithm successfully identifies the antecedent of the pronoun for 86% of these pronoun occurrences. The relative contributions of the algorithm's components to its overall success rate in this blind test are examined. Experiments were conducted with an enhancement of the algorithm that contributes statistically modelled information concerning semantic and real-world relations to the algorithm's decision procedure. Interestingly, this enhancement only marginally improves the algorithm's performance (by 2%). The algorithm is compared with other approaches to anaphora resolution that have been proposed in the literature. In particular, the search procedure of Hobbs' algorithm was implemented in the Slot Grammar framework and applied to the sentences in the blind test set. The authors' algorithm achieves a higher rate of success (4%) than Hobbs' algorithm. The relation of the algorithm to the centering approach is discussed, as well as to models of anaphora resolution that invoke a variety of informational factors in ranking antecedent candidates.
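A toy sketch of salience-based antecedent ranking in the spirit of this algorithm: candidate noun phrases receive weights for grammatical role and recency, agreement-incompatible candidates are filtered out, and the highest-scoring survivor is chosen. The weights, features and example below are illustrative assumptions, not the published salience factor values:

def resolve_pronoun(pronoun, candidates):
    """candidates: list of dicts with 'text', 'gender', 'number', 'role',
    'sentence_distance'. Returns the most salient agreement-compatible antecedent."""
    role_weight = {"subject": 80, "object": 50, "oblique": 40}   # illustrative weights
    best, best_score = None, float("-inf")
    for c in candidates:
        if c["gender"] != pronoun["gender"] or c["number"] != pronoun["number"]:
            continue                                             # agreement filter
        score = role_weight.get(c["role"], 30) - 50 * c["sentence_distance"]
        if score > best_score:
            best, best_score = c, score
    return best["text"] if best else None

pronoun = {"gender": "fem", "number": "sg"}
candidates = [
    {"text": "the printer", "gender": "neut", "number": "sg", "role": "object", "sentence_distance": 0},
    {"text": "Alice", "gender": "fem", "number": "sg", "role": "subject", "sentence_distance": 1},
]
print(resolve_pronoun(pronoun, candidates))   # -> Alice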
Statistical Timing Analysis Considering Spatial Correlations using a Single Pert-Like Traversal We present an efficient statistical timing analysis algorithm that predicts the probability distribution of the circuit delay while incorporating the effects of spatial correlations of intra-die parameter variations, using a method based on principal component analysis. The method uses a PERT-like circuit graph traversal, and has a run-time that is linear in the number of gates and interconnects, as well as the number of grid partitions used to model spatial correlations. On average, the mean and standard deviation values computed by our method have errors of 0.2% and 0.9%, respectively, in comparison with a Monte Carlo simulation.
Fuzzy logic in control systems: fuzzy logic controller. I.
Compressed Remote Sensing of Sparse Objects The linear inverse source and scattering problems are studied from the perspective of compressed sensing. By introducing the sensor as well as target ensembles, the maximum number of recoverable targets is proved to be at least proportional to the number of measurement data modulo a log-square factor with overwhelming probability. Important contributions include the discoveries of the threshold aperture, consistent with the classical Rayleigh criterion, and the incoherence effect induced by random antenna locations. The predictions of theorems are confirmed by numerical simulations.
A Bayesian approach to image expansion for improved definition. Accurate image expansion is important in many areas of image analysis. Common methods of expansion, such as linear and spline techniques, tend to smooth the image data at edge regions. This paper introduces a method for nonlinear image expansion which preserves the discontinuities of the original image, producing an expanded image with improved definition. The maximum a posteriori (MAP) estimation techniques that are proposed for noise-free and noisy images result in the optimization of convex functionals. The expanded images produced from these methods will be shown to be aesthetically and quantitatively superior to images expanded by the standard methods of replication, linear interpolation, and cubic B-spline expansion.
A simple Cooperative diversity method based on network path selection Cooperative diversity has been recently proposed as a way to form virtual antenna arrays that provide dramatic gains in slow fading wireless environments. However, most of the proposed solutions require distributed space-time coding algorithms, the careful design of which is left for future investigation if there is more than one cooperative relay. We propose a novel scheme that alleviates these problems and provides diversity gains on the order of the number of relays in the network. Our scheme first selects the best relay from a set of M available relays and then uses this "best" relay for cooperation between the source and the destination. We develop and analyze a distributed method to select the best relay that requires no topology information and is based on local measurements of the instantaneous channel conditions. This method also requires no explicit communication among the relays. The success (or failure) to select the best available path depends on the statistics of the wireless channel, and a methodology to evaluate performance for any kind of wireless channel statistics, is provided. Information theoretic analysis of outage probability shows that our scheme achieves the same diversity-multiplexing tradeoff as achieved by more complex protocols, where coordination and distributed space-time coding for M relay nodes is required, such as those proposed by Laneman and Wornell (2003). The simplicity of the technique allows for immediate implementation in existing radio hardware and its adoption could provide for improved flexibility, reliability, and efficiency in future 4G wireless systems.
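A small sketch of the selection rule discussed here: each relay observes its instantaneous source-relay and relay-destination channel gains, and the relay whose weaker hop is strongest is chosen (a max-min policy, one of the selection metrics considered in this line of work). The Rayleigh-fading draws and relay count are illustrative:

import numpy as np

rng = np.random.default_rng(5)
M = 6                                    # number of candidate relays
h_sr = rng.rayleigh(scale=1.0, size=M)   # |source -> relay| channel amplitudes
h_rd = rng.rayleigh(scale=1.0, size=M)   # |relay -> destination| channel amplitudes

# the end-to-end quality of relay i is limited by its weaker hop
metric = np.minimum(h_sr ** 2, h_rd ** 2)
best = int(np.argmax(metric))
print(f"selected relay {best} with bottleneck gain {metric[best]:.3f}")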
Using polynomial chaos to compute the influence of multiple random surfers in the PageRank model The PageRank equation computes the importance of pages in a web graph relative to a single random surfer with a constant teleportation coefficient. To be globally relevant, the teleportation coefficient should account for the influence of all users. Therefore, we correct the PageRank formulation by modeling the teleportation coefficient as a random variable distributed according to user behavior. With this correction, the PageRank values themselves become random. We present two methods to quantify the uncertainty in the random PageRank: a Monte Carlo sampling algorithm and an algorithm based on the truncated polynomial chaos expansion of the random quantities. With each of these methods, we compute the expectation and standard deviation of the PageRanks. Our statistical analysis shows that the standard deviations of the PageRanks are uncorrelated with the PageRank vector.
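A minimal sketch of the Monte Carlo approach described here: draw the teleportation coefficient from a user-behaviour distribution, solve PageRank by power iteration for each draw, and report the mean and standard deviation of the resulting PageRank vectors. The tiny graph and the Beta distribution parameters are illustrative assumptions:

import numpy as np

def pagerank(P, alpha, tol=1e-10, max_iter=500):
    """Power iteration for PageRank with a column-stochastic transition matrix P."""
    n = P.shape[0]
    x = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        x_new = alpha * P @ x + (1 - alpha) / n
        if np.abs(x_new - x).sum() < tol:
            return x_new
        x = x_new
    return x

# tiny 4-page web graph; columns sum to 1 (column-stochastic)
P = np.array([[0, 0, 1/2, 0],
              [1/2, 0, 0, 1],
              [1/2, 1/2, 0, 0],
              [0, 1/2, 1/2, 0]])

rng = np.random.default_rng(6)
alphas = rng.beta(8, 2, size=2000)        # teleportation coefficient as a random variable
ranks = np.array([pagerank(P, a) for a in alphas])
print("mean PageRank:", ranks.mean(axis=0).round(3))
print("std  PageRank:", ranks.std(axis=0).round(3))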
An overview of quality of experience measurement challenges for video applications in IP networks The increase in multimedia content on the Internet has created a renewed interest in quality assessment. There is, however, a main difference from traditional quality assessment approaches: the focus now lies on the user-perceived quality, as opposed to the network-centered approach classically proposed. In this paper we overview the most relevant challenges in performing Quality of Experience (QoE) assessment in IP networks and highlight the particular considerations necessary when compared to alternative mechanisms already deployed, such as Quality of Service (QoS). To assist in handling such challenges, we first discuss the different approaches to Quality of Experience assessment along with the most relevant QoE metrics, and then we discuss how they are used to provide objective results about user satisfaction.
A model to perform knowledge-based temporal abstraction over multiple signals In this paper we propose the Multivariable Fuzzy Temporal Profile model (MFTP), which enables the projection of expert knowledge on a physical system over a computable description. This description may be used to perform automatic abstraction on a set of parameters that represent the temporal evolution of the system. This model is based on the constraint satisfaction problem (CSP) formalism, which enables an explicit representation of the knowledge, and on fuzzy set theory, from which it inherits the ability to model the imprecision and uncertainty that are characteristic of the vagueness of human knowledge. We also present an application of the MFTP model to the recognition of landmarks in mobile robotics, specifically to the detection of doors on ultrasound sensor signals from a Nomad 200 robot.
Implementing Competitive Learning in a Quantum System Ideas from quantum computation are applied to the field of neural networks to produce competitive learning in a quantum system. The resulting quantum competitive learner has a prototype storage capacity that is exponentially greater than that of its classical counterpart. Further, empirical results from simulation of the quantum competitive learning system on real-world data sets demonstrate the quantum system's potential for excellent performance.
1.2
0.2
0.2
0.05
0
0
0
0
0
0
0
0
0
0
A Data-Driven Stochastic Method for Elliptic PDEs with Random Coefficients We propose a data-driven stochastic method (DSM) to study stochastic partial differential equations (SPDEs) in the multiquery setting. An essential ingredient of the proposed method is to construct a data-driven stochastic basis under which the stochastic solutions to the SPDEs enjoy a compact representation for a broad range of forcing functions and/or boundary conditions. Our method consists of offline and online stages. A data-driven stochastic basis is computed in the offline stage using the Karhunen-Loève (KL) expansion. A two-level preconditioning optimization approach and a randomized SVD algorithm are used to reduce the offline computational cost. In the online stage, we solve a relatively small number of coupled deterministic PDEs by projecting the stochastic solution into the data-driven stochastic basis constructed offline. Compared with a generalized polynomial chaos method (gPC), the ratio of the computational complexities between DSM (online stage) and gPC is of order O((m/N_p)^2). Here m and N_p are the numbers of elements in the basis used in DSM and gPC, respectively. Typically we expect m << N_p when the effective dimension of the stochastic solution is small. A timing model, which takes into account the offline computational cost of DSM, is constructed to demonstrate the efficiency of DSM. Applications of DSM to stochastic elliptic problems show considerable computational savings over traditional methods even with a small number of queries. We also provide a method for an a posteriori error estimate and error correction.
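A minimal sketch of the offline ingredient described here: build a data-driven (Karhunen-Loève) basis from a matrix of solution snapshots via the SVD and keep the few modes that capture most of the variance. The synthetic snapshot data is an illustrative stand-in for actual SPDE solves, and an ordinary SVD is used instead of the paper's randomized SVD:

import numpy as np

rng = np.random.default_rng(7)
n_space, n_samples = 200, 500
x = np.linspace(0, 1, n_space)

# synthetic snapshots: a few smooth spatial modes with random coefficients plus small noise
modes = np.stack([np.sin((j + 1) * np.pi * x) for j in range(3)], axis=1)
snapshots = modes @ rng.standard_normal((3, n_samples)) + 0.01 * rng.standard_normal((n_space, n_samples))

mean = snapshots.mean(axis=1, keepdims=True)
U, s, _ = np.linalg.svd(snapshots - mean, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
m = int(np.searchsorted(energy, 0.99) + 1)   # smallest basis capturing 99% of the variance
print("data-driven basis size:", m)          # expect roughly 3 for this synthetic data
basis = U[:, :m]                             # offline stochastic basis for the online stage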
A dynamically bi-orthogonal method for time-dependent stochastic partial differential equations I: Derivation and algorithms We propose a dynamically bi-orthogonal method (DyBO) to solve time dependent stochastic partial differential equations (SPDEs). The objective of our method is to exploit some intrinsic sparse structure in the stochastic solution by constructing the sparsest representation of the stochastic solution via a bi-orthogonal basis. It is well-known that the Karhunen-Loeve expansion (KLE) minimizes the total mean squared error and gives the sparsest representation of stochastic solutions. However, the computation of the KL expansion could be quite expensive since we need to form a covariance matrix and solve a large-scale eigenvalue problem. The main contribution of this paper is that we derive an equivalent system that governs the evolution of the spatial and stochastic basis in the KL expansion. Unlike other reduced model methods, our method constructs the reduced basis on-the-fly without the need to form the covariance matrix or to compute its eigendecomposition. In the first part of our paper, we introduce the derivation of the dynamically bi-orthogonal formulation for SPDEs, discuss several theoretical issues, such as the dynamic bi-orthogonality preservation and some preliminary error analysis of the DyBO method. We also give some numerical implementation details of the DyBO methods, including the representation of stochastic basis and techniques to deal with eigenvalue crossing. In the second part of our paper [11], we will present an adaptive strategy to dynamically remove or add modes, perform a detailed complexity analysis, and discuss various generalizations of this approach. An extensive range of numerical experiments will be provided in both parts to demonstrate the effectiveness of the DyBO method.
A dynamically bi-orthogonal method for time-dependent stochastic partial differential equations II: Adaptivity and generalizations This is part II of our paper in which we propose and develop a dynamically bi-orthogonal method (DyBO) to study a class of time-dependent stochastic partial differential equations (SPDEs) whose solutions enjoy a low-dimensional structure. In part I of our paper [9], we derived the DyBO formulation and proposed numerical algorithms based on this formulation. Some important theoretical results regarding consistency and bi-orthogonality preservation were also established in the first part along with a range of numerical examples to illustrate the effectiveness of the DyBO method. In this paper, we focus on the computational complexity analysis and develop an effective adaptivity strategy to add or remove modes dynamically. Our complexity analysis shows that the ratio of computational complexities between the DyBO method and a generalized polynomial chaos method (gPC) is roughly of order O((m/N_p)^3) for a quadratic nonlinear SPDE, where m is the number of mode pairs used in the DyBO method and N_p is the number of elements in the polynomial basis in gPC. The effective dimensions of the stochastic solutions have been found to be small in many applications, so we can expect m to be much smaller than N_p, and the computational savings of our DyBO method against gPC are dramatic. The adaptive strategy plays an essential role for the DyBO method to be effective in solving some challenging problems. Another important contribution of this paper is the generalization of the DyBO formulation for a system of time-dependent SPDEs. Several numerical examples are provided to demonstrate the effectiveness of our method, including the Navier-Stokes equations and the Boussinesq approximation with Brownian forcing.
A Sparse Composite Collocation Finite Element Method for Elliptic SPDEs. This work presents a stochastic collocation method for solving elliptic PDEs with random coefficients and forcing term which are assumed to depend on a finite number of random variables. The method consists of a hierarchic wavelet discretization in space and a sequence of hierarchic collocation operators in the probability domain to approximate the solution's statistics. The selection of collocation points is based on a Smolyak construction of zeros of orthogonal polynomials with respect to the probability density function of each random input variable. A sparse composition of levels of spatial refinements and stochastic collocation points is then proposed and analyzed, resulting in a substantial reduction of overall degrees of freedom. Like in the Monte Carlo approach, the algorithm results in solving a number of uncoupled, purely deterministic elliptic problems, which allows the integration of existing fast solvers for elliptic PDEs. Numerical examples on two-dimensional domains will then demonstrate the superiority of this sparse composite collocation finite element method compared to the “full composite” collocation finite element method and the Monte Carlo method.
A least-squares approximation of partial differential equations with high-dimensional random inputs Uncertainty quantification schemes based on stochastic Galerkin projections, with global or local basis functions, and also stochastic collocation methods in their conventional form, suffer from the so-called curse of dimensionality: the associated computational cost grows exponentially as a function of the number of random variables defining the underlying probability space of the problem. In this paper, to overcome the curse of dimensionality, a low-rank separated approximation of the solution of a stochastic partial differential equation (SPDE) with high-dimensional random input data is obtained using an alternating least-squares (ALS) scheme. It will be shown that, in theory, the computational cost of the proposed algorithm grows linearly with respect to the dimension of the underlying probability space of the system. For the case of an elliptic SPDE, an a priori error analysis of the algorithm is derived. Finally, different aspects of the proposed methodology are explored through its application to some numerical experiments.
The multi-element probabilistic collocation method (ME-PCM): Error analysis and applications Stochastic spectral methods are numerical techniques for approximating solutions to partial differential equations with random parameters. In this work, we present and examine the multi-element probabilistic collocation method (ME-PCM), which is a generalized form of the probabilistic collocation method. In the ME-PCM, the parametric space is discretized and a collocation/cubature grid is prescribed on each element. Both full and sparse tensor product grids based on Gauss and Clenshaw-Curtis quadrature rules are considered. We prove analytically and observe in numerical tests that as the parameter space mesh is refined, the convergence rate of the solution depends on the quadrature rule of each element only through its degree of exactness. In addition, the L^2 error of the tensor product interpolant is examined and an adaptivity algorithm is provided. Numerical examples demonstrating adaptive ME-PCM are shown, including low-regularity problems and long-time integration. We test the ME-PCM on two-dimensional Navier-Stokes examples and a stochastic diffusion problem with various random input distributions and up to 50 dimensions. While the convergence rate of ME-PCM deteriorates in 50 dimensions, the error in the mean and variance is two orders of magnitude lower than the error obtained with the Monte Carlo method using only a small number of samples (e.g., 100). The computational cost of ME-PCM is found to be favorable when compared to the cost of other methods including stochastic Galerkin, Monte Carlo and quasi-random sequence methods.
A flexible numerical approach for quantification of epistemic uncertainty In the field of uncertainty quantification (UQ), epistemic uncertainty often refers to the kind of uncertainty whose complete probabilistic description is not available, largely due to our lack of knowledge about the uncertainty. Quantification of the impacts of epistemic uncertainty is naturally difficult, because most of the existing stochastic tools rely on the specification of probability distributions and thus do not readily apply to epistemic uncertainty. To date, there have been few studies of and methods for dealing with epistemic uncertainty. A recent work can be found in [J. Jakeman, M. Eldred, D. Xiu, Numerical approach for quantification of epistemic uncertainty, J. Comput. Phys. 229 (2010) 4648-4663], where a framework for the numerical treatment of epistemic uncertainty was proposed. The method is based on solving an encapsulation problem, without using any probability information, in a hypercube that encapsulates the unknown epistemic probability space. If more probabilistic information about the epistemic variables is known a posteriori, the solution statistics can then be evaluated in post-processing steps. In this paper, we present a new method, similar to that of Jakeman et al. but significantly extending its capabilities. Most notably, the new method (1) does not require the encapsulation problem to be posed in a bounded domain such as a hypercube; and (2) does not require the solution of the encapsulation problem to converge point-wise. In the current formulation, the encapsulation problem could reside in an unbounded domain, and more importantly, its numerical approximation could be sought in the L^p norm. These features make the new approach more flexible and amenable to practical implementation. Both the mathematical framework and numerical analysis are presented to demonstrate the effectiveness of the new approach.
Efficient Solvers for a Linear Stochastic Galerkin Mixed Formulation of Diffusion Problems with Random Data We introduce a stochastic Galerkin mixed formulation of the steady-state diffusion equation and focus on the efficient iterative solution of the saddle-point systems obtained by combining standard finite element discretizations with two distinct types of stochastic basis functions. So-called mean-based preconditioners, based on fast solvers for scalar diffusion problems, are introduced for use with the minimum residual method. We derive eigenvalue bounds for the preconditioned system matrices and report on the efficiency of the chosen preconditioning schemes with respect to all the discretization parameters.
Mixed aleatory-epistemic uncertainty quantification with stochastic expansions and optimization-based interval estimation Uncertainty quantification (UQ) is the process of determining the effect of input uncertainties on response metrics of interest. These input uncertainties may be characterized as either aleatory uncertainties, which are irreducible variabilities inherent in nature, or epistemic uncertainties, which are reducible uncertainties resulting from a lack of knowledge. When both aleatory and epistemic uncertainties are mixed, it is desirable to maintain a segregation between aleatory and epistemic sources such that it is easy to separate and identify their contributions to the total uncertainty. Current production analyses for mixed UQ employ the use of nested sampling, where each sample taken from epistemic distributions at the outer loop results in an inner loop sampling over the aleatory probability distributions. This paper demonstrates new algorithmic capabilities for mixed UQ in which the analysis procedures are more closely tailored to the requirements of aleatory and epistemic propagation. Through the combination of stochastic expansions for computing statistics and interval optimization for computing bounds, interval-valued probability, second-order probability, and Dempster–Shafer evidence theory approaches to mixed UQ are shown to be more accurate and efficient than previously achievable.
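For reference, the nested-sampling (second-order probability) baseline described above can be sketched as an outer loop over epistemic samples and an inner loop over aleatory samples; the paper's stochastic-expansion and interval-optimization machinery replaces exactly this brute-force structure. The response function, the epistemic interval, and all names below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def response(aleatory_x, epistemic_theta):
    """Toy response metric: aleatory input x, epistemic parameter theta."""
    return np.sin(aleatory_x) + epistemic_theta * aleatory_x**2

n_outer, n_inner = 50, 2000
means = []
for _ in range(n_outer):                      # outer loop over the epistemic interval [0.1, 0.5]
    theta = rng.uniform(0.1, 0.5)             # no distribution is claimed; uniform is only a sampling device
    x = rng.normal(0.0, 1.0, size=n_inner)    # inner loop over the aleatory distribution
    means.append(response(x, theta).mean())   # aleatory statistic for this epistemic sample

# Epistemic uncertainty is reported as an interval on the aleatory statistic,
# keeping the two sources segregated rather than mixing them into one PDF.
print(f"mean response lies in [{min(means):.3f}, {max(means):.3f}]")
```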
Parameterized timing analysis with general delay models and arbitrary variation sources Many recent techniques for timing analysis under variability, in which delay is an explicit function of underlying parameters, may be described as parameterized timing analysis. The "max" operator, used repeatedly during block-based timing analysis, causes several complications during parameterized timing analysis. We introduce bounds on, and an approximation to, the max operator which allow us to develop an accurate, general, and efficient approach to parameterized timing, which can handle either uncertain or random variations. Applied to random variations, the approach is competitive with existing statistical static timing analysis (SSTA) techniques, in that it allows for nonlinear delay models and arbitrary distributions. Applied to uncertain variations, the method is competitive with existing multi-corner STA techniques, in that it more reliably reproduces overall circuit sensitivity to variations. Crucially, this technique can also be applied to the mixed case where both random and uncertain variations are considered. Our results show that, on average, circuit delay is predicted with less than 2% error for multi-corner analysis, and less than 1% error for SSTA.
Wireless Communication
Accelerated iterative hard thresholding The iterative hard thresholding algorithm (IHT) is a powerful and versatile algorithm for compressed sensing and other sparse inverse problems. The standard IHT implementation faces several challenges when applied to practical problems. The step-size and sparsity parameters have to be chosen appropriately and, as IHT is based on a gradient descent strategy, convergence is only linear. Whilst the choice of the step-size can be done adaptively as suggested previously, this letter studies the use of acceleration methods to improve convergence speed. Based on recent suggestions in the literature, we show that a host of acceleration methods are also applicable to IHT. Importantly, we show that these modifications not only significantly increase the observed speed of the method, but also satisfy the same strong performance guarantees enjoyed by the original IHT method.
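For orientation, a minimal sketch of the plain IHT iteration that the letter accelerates is given below: x ← H_s(x + μ Aᵀ(y − Ax)) with a fixed, conservatively chosen step size. The acceleration schemes and adaptive step-size rules discussed in the letter are deliberately omitted, and the example problem is illustrative.

```python
import numpy as np

def hard_threshold(x, s):
    """Keep the s largest-magnitude entries of x and zero the rest."""
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-s:]
    out[idx] = x[idx]
    return out

def iht(A, y, s, mu=None, iters=200):
    """Plain iterative hard thresholding: x <- H_s(x + mu * A^T (y - A x))."""
    m, n = A.shape
    if mu is None:
        mu = 1.0 / np.linalg.norm(A, 2) ** 2   # conservative fixed step size
    x = np.zeros(n)
    for _ in range(iters):
        x = hard_threshold(x + mu * A.T @ (y - A @ x), s)
    return x

# Example: recover a 5-sparse vector from 60 random measurements
rng = np.random.default_rng(2)
A = rng.normal(size=(60, 200)) / np.sqrt(60)
x_true = np.zeros(200)
x_true[rng.choice(200, 5, replace=False)] = rng.normal(size=5)
x_hat = iht(A, A @ x_true, s=5)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))  # relative error, typically small
```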
Progressive design methodology for complex engineering systems based on multiobjective genetic algorithms and linguistic decision making This work focuses on a design methodology that aids in the design and development of complex engineering systems. This design methodology consists of simulation, optimization and decision making. Within this work a framework is presented in which modelling, multi-objective optimization and multi-criteria decision-making techniques are used to design an engineering system. Due to the complexity of the designed system, a three-step design process is suggested. In the first step, multi-objective optimization using a genetic algorithm is performed. In the second step, a multi-attribute decision-making process based on linguistic variables is suggested in order to allow the designer to express preferences. In the last step, the fine tuning of a selected few variants is performed. This methodology is named the progressive design methodology. The method is applied as a case study to the design of a permanent magnet brushless DC motor drive, and the results are compared with experimental values.
Stochastic Behavioral Modeling and Analysis for Analog/Mixed-Signal Circuits It has become increasingly challenging to model the stochastic behavior of analog/mixed-signal (AMS) circuits under large-scale process variations. In this paper, a novel moment-matching-based method is proposed to accurately extract the probabilistic behavioral distributions of AMS circuits. This method first utilizes Latin hypercube sampling coupled with a correlation control technique to generate a few samples (e.g., a sample size linear in the number of variable parameters) and then analytically evaluates the high-order moments of the circuit behavior with high accuracy. In this way, the arbitrary probabilistic distributions of the circuit behavior can be extracted using the moment-matching method. More importantly, the proposed method has been successfully applied to high-dimensional problems with linear complexity. The experiments demonstrate that the proposed method can provide up to 1666X speedup over the crude Monte Carlo method for the same accuracy.
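The sampling stage mentioned above starts from a Latin hypercube design; a minimal sketch of plain Latin hypercube sampling on the unit hypercube (without the paper's correlation-control step or the moment-matching post-processing) is shown below for orientation. The function name and sizes are illustrative.

```python
import numpy as np

def latin_hypercube(n_samples, n_dims, rng=None):
    """Plain Latin hypercube sample on [0, 1]^d: one point per stratum in each dimension."""
    rng = rng or np.random.default_rng()
    # Stratify [0, 1] into n_samples bins per dimension and jitter within each bin.
    u = (np.arange(n_samples)[:, None] + rng.random((n_samples, n_dims))) / n_samples
    # Independently permute each column so strata are paired at random across dimensions.
    for j in range(n_dims):
        u[:, j] = u[rng.permutation(n_samples), j]
    return u

samples = latin_hypercube(20, 3, np.random.default_rng(3))
print(samples.shape)   # (20, 3); each column hits every 1/20-wide stratum exactly once
```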
1.034562
0.006061
0.005556
0.004169
0.002251
0.000585
0.000222
0.000077
0.000007
0
0
0
0
0
Practical compressive sensing of large images Compressive imaging (CI) is a natural branch of compressed sensing (CS). One of the main difficulties in implementing CI is that, unlike many other CS applications, it involves a huge amount of data. This data load has extensive implications for the complexity of the optical design, the complexity of calibration, and the data storage requirements. As a result, practical CI implementations are mostly limited to relatively small image sizes. Recently we have shown that it is possible to overcome these problems by using a separable imaging operator. We have demonstrated that a separable imaging operator permits CI of megapixel-size images, and we derived a theoretical bound for the oversampling factor requirements. Here we further elaborate on the tradeoffs of using a separable imaging operator, and present and discuss additional experimental results.
Compressed Imaging With a Separable Sensing Operator Compressive imaging (CI) is a natural branch of compressed sensing (CS). Although a number of CI implementations have started to appear, the design of efficient CI system still remains a challenging problem. One of the main difficulties in implementing CI is that it involves huge amounts of data, which has far-reaching implications for the complexity of the optical design, calibration, data storag...
Kronecker compressive sensing. Compressive sensing (CS) is an emerging approach for the acquisition of signals having a sparse or compressible representation in some basis. While the CS literature has mostly focused on problems involving 1-D signals and 2-D images, many important applications involve multidimensional signals; the construction of sparsifying bases and measurement systems for such signals is complicated by their higher dimensionality. In this paper, we propose the use of Kronecker product matrices in CS for two purposes. First, such matrices can act as sparsifying bases that jointly model the structure present in all of the signal dimensions. Second, such matrices can represent the measurement protocols used in distributed settings. Our formulation enables the derivation of analytical bounds for the sparse approximation of multidimensional signals and CS recovery performance, as well as a means of evaluating novel distributed measurement schemes.
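The computational appeal of Kronecker-structured measurements comes from a standard identity: applying a Kronecker-product matrix to a vectorized 2-D signal is the same as applying the two small per-dimension matrices to the signal's rows and columns, so the large global operator never has to be formed. A minimal numerical check of this identity, with illustrative sizes, is sketched below.

```python
import numpy as np

rng = np.random.default_rng(4)
n1, n2 = 32, 24                   # image dimensions
m1, m2 = 12, 10                   # per-dimension measurement counts

Phi1 = rng.normal(size=(m1, n1))  # measurement matrix along dimension 1
Phi2 = rng.normal(size=(m2, n2))  # measurement matrix along dimension 2
X = rng.normal(size=(n1, n2))     # 2-D signal

# Global Kronecker measurement of the (row-major) vectorized signal ...
y_kron = np.kron(Phi1, Phi2) @ X.reshape(-1)
# ... equals the separable measurement built from two small matrix products.
y_sep = (Phi1 @ X @ Phi2.T).reshape(-1)
print(np.allclose(y_kron, y_sep))  # True
```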
Tensor-Train Decomposition A simple nonrecursive form of the tensor decomposition in $d$ dimensions is presented. It does not inherently suffer from the curse of dimensionality, it has asymptotically the same number of parameters as the canonical decomposition, but it is stable and its computation is based on low-rank approximation of auxiliary unfolding matrices. The new form gives a clear and convenient way to implement all basic operations efficiently. A fast rounding procedure is presented, as well as basic linear algebra operations. Examples showing the benefits of the decomposition are given, and the efficiency is demonstrated by the computation of the smallest eigenvalue of a 19-dimensional operator.
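A minimal sketch of the TT-SVD construction described above follows: the tensor is swept dimension by dimension, each auxiliary unfolding matrix is compressed by a truncated SVD, and the retained factors become the TT cores. The tolerance rule and the test tensor are illustrative, and the code favors clarity over efficiency.

```python
import numpy as np

def tt_svd(tensor, eps=1e-10):
    """Decompose a d-way array into tensor-train cores via sequential truncated SVDs."""
    dims = tensor.shape
    d = len(dims)
    cores, r_prev = [], 1
    unfold = tensor.reshape(r_prev * dims[0], -1)
    for k in range(d - 1):
        U, S, Vt = np.linalg.svd(unfold, full_matrices=False)
        r = max(1, int(np.sum(S > eps * S[0])))            # truncated TT rank
        cores.append(U[:, :r].reshape(r_prev, dims[k], r))  # next TT core
        unfold = (np.diag(S[:r]) @ Vt[:r]).reshape(r * dims[k + 1], -1)
        r_prev = r
    cores.append(unfold.reshape(r_prev, dims[-1], 1))
    return cores

def tt_to_full(cores):
    """Contract the TT cores back to the full tensor (for checking only)."""
    out = cores[0]
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=([-1], [0]))
    return out.squeeze(axis=(0, -1))

# Example: a 4-way tensor with an exact low-rank TT structure
x = np.linspace(0, 1, 8)
T = np.add.outer(np.add.outer(x, x), np.add.outer(x, x))   # f = x1+x2+x3+x4 has TT ranks 2
cores = tt_svd(T)
print([c.shape for c in cores], np.allclose(tt_to_full(cores), T))
```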
Computational Methods for Sparse Solution of Linear Inverse Problems The goal of the sparse approximation problem is to approximate a target signal using a linear combination of a few elementary signals drawn from a fixed collection. This paper surveys the major practical algorithms for sparse approximation. Specific attention is paid to computational issues, to the circumstances in which individual methods tend to perform well, and to the theoretical guarantees available. Many fundamental questions in electrical engineering, statistics, and applied mathematics can be posed as sparse approximation problems, making these algorithms versatile and relevant to a plethora of applications.
Single-Pixel Imaging via Compressive Sampling In this article, the authors present a new approach to building simpler, smaller, and cheaper digital cameras that can operate efficiently across a broader spectral range than conventional silicon-based cameras. The approach fuses a new camera architecture based on a digital micromirror device with the new mathematical theory and algorithms of compressive sampling.
Greed is good: algorithmic results for sparse approximation This article presents new results on using a greedy algorithm, orthogonal matching pursuit (OMP), to solve the sparse approximation problem over redundant dictionaries. It provides a sufficient condition under which both OMP and Donoho's basis pursuit (BP) paradigm can recover the optimal representation of an exactly sparse signal. It leverages this theory to show that both OMP and BP succeed for every sparse input signal from a wide class of dictionaries. These quasi-incoherent dictionaries offer a natural generalization of incoherent dictionaries, and the cumulative coherence function is introduced to quantify the level of incoherence. This analysis unifies all the recent results on BP and extends them to OMP. Furthermore, the paper develops a sufficient condition under which OMP can identify atoms from an optimal approximation of a nonsparse signal. From there, it argues that OMP is an approximation algorithm for the sparse problem over a quasi-incoherent dictionary. That is, for every input signal, OMP calculates a sparse approximant whose error is only a small factor worse than the minimal error that can be attained with the same number of terms.
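For reference, a minimal sketch of orthogonal matching pursuit for the exactly sparse, noiseless setting analyzed in the article follows: at each step the atom most correlated with the residual is selected, and the coefficients are re-fit by least squares on the chosen support. The stopping rule (a fixed number of iterations equal to the target sparsity) and the random dictionary are illustrative simplifications.

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal matching pursuit: greedily pick k atoms of dictionary D to represent y."""
    residual, support = y.copy(), []
    for _ in range(k):
        # Pick the atom most correlated with the current residual.
        j = int(np.argmax(np.abs(D.T @ residual)))
        support.append(j)
        # Re-fit y on all chosen atoms (the "orthogonal" step) and update the residual.
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

# Example: recover a 4-sparse coefficient vector over a random column-normalized dictionary
rng = np.random.default_rng(5)
D = rng.normal(size=(64, 256))
D /= np.linalg.norm(D, axis=0)
x_true = np.zeros(256)
x_true[rng.choice(256, 4, replace=False)] = rng.normal(size=4)
x_hat = omp(D, D @ x_true, 4)
print(np.linalg.norm(x_hat - x_true))  # near zero when the support is identified correctly
```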
Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information This paper considers the model problem of reconstructing an object from incomplete frequency samples. Consider a discrete-time signal f ∈ C^N and a randomly chosen set of frequencies Ω. Is it possible to reconstruct f from the partial knowledge of its Fourier coefficients on the set Ω? A typical result of this paper is as follows. Suppose that f is a superposition of |T| spikes f(t) = Σ_{τ∈T} f(τ)δ(t−τ) obeying |T| ≤ C_M·(log N)^{−1}·|Ω| for some constant C_M > 0. We do not know the locations of the spikes nor their amplitudes. Then with probability at least 1−O(N^{−M}), f can be reconstructed exactly as the solution to the ℓ1 minimization problem. In short, exact recovery may be obtained by solving a convex optimization problem. We give numerical values for C_M which depend on the desired probability of success. Our result may be interpreted as a novel kind of nonlinear sampling theorem. In effect, it says that any signal made out of |T| spikes may be recovered by convex programming from almost every set of frequencies of size O(|T|·log N). Moreover, this is nearly optimal in the sense that any method succeeding with probability 1−O(N^{−M}) would in general require a number of frequency samples at least proportional to |T|·log N. The methodology extends to a variety of other situations and higher dimensions. For example, we show how one can reconstruct a piecewise constant (one- or two-dimensional) object from incomplete frequency samples - provided that the number of jumps (discontinuities) obeys the condition above - by minimizing other convex functionals such as the total variation of f.
FastSies: a fast stochastic integral equation solver for modeling the rough surface effect In this paper we describe several novel sparsification techniques used in a fast stochastic integral equation solver to compute the mean value and the variance of capacitance of 3D interconnects with random surface roughness. With the combination of these numerical techniques, the computational cost has been reduced from O(N^4) to O(N log^2(N)), where N is the number of panels used for the discretization of nominal smooth surfaces. Numerical experiments show that the proposed numerical techniques are accurate and efficient.
The Chebyshev expansion based passive model for distributed interconnect networks A new Chebyshev expansion based model for distributed interconnect networks is presented in this paper. Unlike the moment methods, this new model is optimal and it does not require the knowledge of expansion points. An automatic order selection scheme is also included in the new model. By using the integrated congruence transform, we guarantee the passivity of the new model for distributed interconnect networks. Because of the orthogonality of Chebyshev polynomials, the Modified Gram-Schmidt algorithm can be simplified. In the experimental examples, the new model is found to be accurate and efficient.
Compressed Sensing in Astronomy Recent advances in signal processing have focused on the use of sparse representations in various applications. A new field of interest based on sparsity has recently emerged: compressed sensing. This theory is a new sampling framework that provides an alternative to the well-known Shannon sampling theory. In this paper we investigate how compressed sensing (CS) can provide new insights into astronomical data compression and, more generally, how it paves the way for new conceptions in astronomical remote sensing. We first give a brief overview of the compressed sensing theory, which provides a very simple coding process with low computational cost, thus favoring its use for the real-time applications often found on board space missions. We introduce a practical and effective recovery algorithm for decoding compressed data. In astronomy, physical prior information is often crucial for devising effective signal processing methods. We particularly point out that a CS-based compression scheme is flexible enough to account for such information. In this context, compressed sensing is a new framework in which data acquisition and data processing are merged. We also show that CS provides a fantastic new way to handle multiple observations of the same field of view, allowing us to recover information at very low signal-to-noise ratio, which is impossible with standard compression methods. This CS data fusion concept could lead to an elegant and effective way to solve the problem ESA is faced with for the transmission to Earth of the data collected by PACS, one of the instruments on board the Herschel spacecraft, which will be launched in 2008.
Algorithms for distributed functional monitoring We study what we call functional monitoring problems. We have k players, each tracking their inputs, say player i tracking a multiset A_i(t) up until time t, and communicating with a central coordinator. The coordinator's task is to monitor a given function f computed over the union of the inputs ∪_i A_i(t), continuously at all times t. The goal is to minimize the number of bits communicated between the players and the coordinator. A simple example is when f is the sum, and the coordinator is required to alert when the sum of a distributed set of values exceeds a given threshold τ. Of interest is the approximate version where the coordinator outputs 1 if f ≥ τ and 0 if f ≤ (1−ε)τ. This defines the (k, f, τ, ε) distributed functional monitoring problem. Functional monitoring problems are fundamental in distributed systems, in particular sensor networks, where we must minimize communication; they also connect to problems in communication complexity, communication theory, and signal processing. Yet few formal bounds are known for functional monitoring. We give upper and lower bounds for the (k, f, τ, ε) problem for some of the basic f's. In particular, we study frequency moments (F_0, F_1, F_2). For F_0 and F_1, we obtain continuous monitoring algorithms with costs almost the same as their one-shot computation algorithms. However, for F_2 the monitoring problem seems much harder. We give a carefully constructed multi-round algorithm that uses "sketch summaries" at multiple levels of detail and solves the (k, F_2, τ, ε) problem with communication Õ(k^2/ε + (√k/ε)^3). Since frequency moment estimation is central to other problems, our results have immediate applications to histograms, wavelet computations, and others. Our algorithmic techniques are likely to be useful for other functional monitoring problems as well.
On replacement models via a fuzzy set theoretic framework Uncertainty is present in virtually all replacement decisions due to unknown future events, such as revenue streams, maintenance costs, and inflation. Fuzzy sets provide a mathematical framework for explicitly incorporating imprecision into the decision-making model, especially when the system involves human subjectivity. This paper illustrates the use of fuzzy sets and possibility theory to explicitly model uncertainty in replacement decisions via fuzzy variables and numbers. In particular, a fuzzy set approach to the calculation of the economic life of an asset, as well as a finite-horizon single-asset replacement problem with multiple challengers, is discussed. Because the use of triangular fuzzy numbers provides a compromise between computational efficiency and realistic modeling of the uncertainty, this discussion emphasizes fuzzy numbers. The algorithms used to determine the optimal replacement policy incorporate fuzzy arithmetic, dynamic programming (DP) with fuzzy rewards, the vertex method, and various ranking methods for fuzzy numbers. A brief history of replacement analysis, current conventional techniques, the basic concepts of fuzzy sets and possibility theory, and the advantages of the fuzzy generalization are also discussed.
Using the Stochastic Collocation Method for the Uncertainty Quantification of Drug Concentration Due to Depot Shape Variability. Numerical simulations entail modeling assumptions that impact outcomes. Therefore, characterizing, in a probabilistic sense, the relationship between the variability of model selection and the variability of outcomes is important. Under certain assumptions, the stochastic collocation method offers a computationally feasible alternative to traditional Monte Carlo approaches for assessing the impact...
1.103074
0.026021
0.015164
0.008851
0.004249
0.001441
0.000658
0.000158
0.000039
0.000007
0
0
0
0
Alpha-Level Aggregation: A Practical Approach to Type-1 OWA Operation for Aggregating Uncertain Information with Applications to Breast Cancer Treatments The type-1 Ordered Weighted Averaging (OWA) operator provides us with a new technique for directly aggregating uncertain information with uncertain weights via the OWA mechanism in soft decision making and data mining, in which uncertain objects are modeled by fuzzy sets. The Direct Approach to performing type-1 OWA operation involves high computational overhead. In this paper, we define a type-1 OWA operator based on the α-cuts of fuzzy sets. Then, we prove a Representation Theorem of type-1 OWA operators, by which type-1 OWA operators can be decomposed into a series of α-level type-1 OWA operators. Furthermore, we suggest a fast approach, called the Alpha-Level Approach, to implementing the type-1 OWA operator. A practical application of type-1 OWA operators to breast cancer treatments is addressed. Experimental results and theoretical analyses show that: 1) the Alpha-Level Approach, with linear order complexity, can achieve much higher computing efficiency in performing type-1 OWA operations than the existing Direct Approach; 2) the type-1 OWA operators exhibit different aggregation behaviors from the existing fuzzy weighted averaging (FWA) operators; and 3) the type-1 OWA operators demonstrate the ability to efficiently aggregate uncertain information with uncertain weights in solving real-world soft decision-making problems.
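As a rough illustration of level-by-level aggregation, the sketch below applies a classical crisp-weight OWA to the interval α-cuts of the fuzzy inputs at a single α level. This is only the crisp building block: the paper's Alpha-Level Approach additionally handles uncertain (fuzzy) weights at each level, which is not reproduced here; the weights and α-cuts are illustrative.

```python
import numpy as np

def owa(values, weights):
    """Classical (crisp) OWA: weights are applied to the values sorted in descending order."""
    return float(np.sort(values)[::-1] @ weights)

def interval_owa(alpha_cuts, weights):
    """Aggregate interval alpha-cuts [a_i, b_i] endpoint-wise with a crisp-weight OWA.

    Simplification only: the Alpha-Level Approach also optimizes over fuzzy weight
    alpha-cuts at each level, which is omitted in this sketch.
    """
    lo = owa([a for a, _ in alpha_cuts], weights)
    hi = owa([b for _, b in alpha_cuts], weights)
    return lo, hi

weights = np.array([0.4, 0.3, 0.2, 0.1])                          # crisp OWA weights, sum to 1
cuts_at_alpha = [(0.2, 0.5), (0.4, 0.7), (0.1, 0.6), (0.3, 0.9)]  # alpha-cuts of 4 fuzzy inputs
print(interval_owa(cuts_at_alpha, weights))                       # aggregated alpha-cut
```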
A New Decision-Making Method for Stock Portfolio Selection Based on Computing with Linguistic Assessment
Checking and adjusting order-consistency of linguistic pairwise comparison matrices for getting transitive preference relations Linguistic pairwise comparison matrices for decision-making need to be (strongly) order-consistent, which means the judgments should be (strongly) transitive. We introduce the equivalent conditions of (strong) transitivity by using route matrices and digraphs and develop an adjustment procedure to help decision makers correcting the inconsistency. The conclusion can extend to other ordinal-scaled or numerical comparison matrices as well.
Type-Reduction Of General Type-2 Fuzzy Sets: The Type-1 OWA Approach For general type-2 fuzzy sets, the defuzzification process is very complex, and the exhaustive direct method of implementing type-reduction is computationally expensive and turns out to be impractical. This has inevitably hindered the development of type-2 fuzzy inferencing systems in real-world applications. The present situation cannot be expected to change unless an efficient and fast method of defuzzifying general type-2 fuzzy sets emerges. Type-1 ordered weighted averaging (OWA) operators have been proposed to aggregate expert uncertain knowledge expressed by type-1 fuzzy sets in decision making. In particular, the recently developed alpha-level approach to type-1 OWA operations has proven to be an effective tool for aggregating uncertain information with uncertain weights in real-time applications because its complexity is of linear order. In this paper, we prove that the mathematical representation of the type-reduced set (TRS) of a general type-2 fuzzy set is equivalent to that of a special case of type-1 OWA operator. This relationship opens up a new way of performing type reduction of general type-2 fuzzy sets, allowing the use of the alpha-level approach to type-1 OWA operations to compute the TRS of a general type-2 fuzzy set. As a result, a fast and efficient method of computing the centroid of general type-2 fuzzy sets is realized. The experimental results presented here illustrate the effectiveness of this method in conducting type reduction of different general type-2 fuzzy sets.
Intuitionistic hesitant linguistic sets and their application in multi-criteria decision-making problems. In practice, the use of linguistic information is flexible and definite due to the complexity of problems, and therefore the linguistic models have been widely studied and applied to solve multi-criteria decision-making (MCDM) problems under uncertainty. In this paper, intuitionistic hesitant linguistic sets (IHLSs) are defined on the basis of intuitionistic linguistic sets (ILSs) and hesitant fuzzy linguistic sets (HFLSs). As an evaluation value of one reference object, an intuitionistic hesitant linguistic number (IHLN) contains a linguistic term, a set of membership degrees and a set of non-membership degrees. There can be a consensus on the linguistic term and then decision makers can express their opinions on membership or non-membership degrees depending on their preferences. Therefore, by means of IHLSs, the flexibility in generating evaluation information under uncertainty can be achieved to a larger extent than either ILSs or HFLSs do. Besides, the basic operations and comparison method of IHLNs are studied, which is followed by the definitions of several aggregation operators, including the intuitionistic hesitant linguistic hybrid averaging (IHLHA) operator, the intuitionistic hesitant linguistic hybrid geometric (IHLHG) operator and the corresponding generalized operators. Using these operators, an approach to MCDM problems with intuitionistic hesitant linguistic information is proposed. Finally, an illustrative example is provided to verify the proposed approach and its accuracy and effectiveness have been demonstrated through the comparative analysis with ILSs and HFLSs.
On type-2 fuzzy relations and interval-valued type-2 fuzzy sets This paper introduces new operations on the algebra of fuzzy truth values, extended supremum and extended infimum, which are generalizations of the extended operations of maximum and minimum between fuzzy truth values for type-2 fuzzy sets, respectively. Using these new operations, the properties of type-2 fuzzy relations are discussed, especially the compositions of type-2 fuzzy relations. On this basis, this paper introduces interval-valued type-2 fuzzy sets and interval-valued type-2 fuzzy relations, and discusses their properties.
A method for multiple attribute decision making with incomplete weight information in linguistic setting The aim of this paper is to investigate the multiple attribute decision making problems with linguistic information, in which the information about attribute weights is incompletely known, and the attribute values take the form of linguistic variables. We first introduce some approaches to obtaining the weight information of attributes, and then establish an optimization model based on the ideal point of attribute values, by which the attribute weights can be determined. For the special situations where the information about attribute weights is completely unknown, we establish another optimization model. By solving this model, we get a simple and exact formula, which can be used to determine the attribute weights. We utilize the numerical weighting linguistic average (NWLA) operator to aggregate the linguistic variables corresponding to each alternative, and then rank the alternatives by means of the aggregated linguistic information. Finally, the developed method is applied to the ranking and selection of propulsion/manoeuvring system of a double-ended passenger ferry.
Fuzzy Grey GM(1,1) Model Under Fuzzy System The grey GM(1,1) forecasting model is a short-term forecasting method which has been successfully applied to management and engineering problems with as few as four data points. However, when a new system is constructed, the system is uncertain and variable, so the collected data are usually of fuzzy type and cannot be used directly in the grey GM(1,1) forecasting model. In order to cope with this problem, the fuzzy system derived from the collected data is handled through a fuzzy grey controlled variable, yielding a fuzzy grey GM(1,1) model that forecasts the extrapolative values under the fuzzy system. Finally, an example is described for illustration.
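For context, a minimal sketch of the crisp GM(1,1) model that the paper fuzzifies is given below: the data sequence is accumulated, the grey development and control coefficients are fit by least squares, and the time-response formula is differenced back to produce forecasts. The fuzzy grey controlled variable introduced in the paper is not included, and the sample data are illustrative.

```python
import numpy as np

def gm11_forecast(x0, steps=1):
    """Crisp grey GM(1,1) forecast from a short data sequence x0 (length >= 4)."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                                   # accumulated generating operation
    z1 = 0.5 * (x1[1:] + x1[:-1])                        # background values
    B = np.column_stack([-z1, np.ones_like(z1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]     # development / control coefficients
    k = np.arange(1, len(x0) + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a    # time-response sequence
    # Difference the accumulated forecasts back to the original scale.
    x0_hat = np.concatenate([[x0[0]], np.diff(np.concatenate([[x0[0]], x1_hat]))])
    return x0_hat[-steps:]                               # forecasts beyond the observed data

print(gm11_forecast([2.87, 3.28, 3.34, 3.59], steps=2))  # two-step-ahead forecast
```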
Computing with words for hierarchical competency based selection of personnel in construction companies As part of human resource management policies and practices, construction firms need to define competency requirements for project staff, and recruit the necessary team for completion of project assignments. Traditionally, potential candidates are interviewed and the most qualified are selected. Applicable methodologies that could take various candidate competencies and inherent uncertainties of human evaluation into consideration and then pinpoint the most qualified person with a high degree of reliability would be beneficial. In the last decade, computing with words (CWW) has been the center of attention of many researchers for its intrinsic capability of dealing with linguistic, vague, interdependent, and imprecise information under uncertain environments. This paper presents a CWW approach, based on the specific architecture of Perceptual Computer (Per-C) and the Linguistic Weighted Average (LWA), for competency based selection of human resources in construction firms. First, human resources are classified into two types of main personnel: project manager and engineer. Then, a hierarchical criteria structure for competency based evaluation of each main personnel category is established upon the available literature and survey. Finally, the perceptual computer approach is utilized to develop a practical model for competency based selection of personnel in construction companies. We believe that the proposed approach provides a useful tool to handle personnel selection problem in a more reliable and intelligent manner.
Evaluating the information quality of Web sites: A methodology based on fuzzy computing with words An evaluation methodology based on fuzzy computing with words aimed at measuring the information quality of Web sites containing documents is presented. This methodology is qualitative and user oriented because it generates linguistic recommendations on the information quality of the content-based Web sites based on users' perceptions. It is composed of two main components, an evaluation scheme to analyze the information quality of Web sites and a measurement method to generate the linguistic recommendations. The evaluation scheme is based on both technical criteria related to the Web site structure and criteria related to the content of information on the Web sites. It is user driven because the chosen criteria are easily understandable by the users, in such a way that Web visitors can assess them by means of linguistic evaluation judgments. The measurement method is user centered because it generates linguistic recommendations of the Web sites based on the visitors' linguistic evaluation judgments. To combine the linguistic evaluation judgments we introduce two new majority guided linguistic aggregation operators, the Majority guided Linguistic Induced Ordered Weighted Averaging (MLIOWA) and weighted MLIOWA operators, which generate the linguistic recommendations according to the majority of the evaluation judgments provided by different visitors. The use of this methodology could improve tasks such as information filtering and evaluation on the World Wide Web.
Genetic Learning Of Fuzzy Rule-Based Classification Systems Cooperating With Fuzzy Reasoning Methods In this paper, we present a multistage genetic learning process for obtaining linguistic fuzzy rule-based classification systems that integrates fuzzy reasoning methods cooperating with the fuzzy rule base and learns the best set of linguistic hedges for the linguistic variable terms. We show the application of the genetic learning process to two well known sample bases, and compare the results with those obtained from different learning algorithms. The results show the good behavior of the proposed method, which maintains the linguistic description of the fuzzy rules. (C) 1998 John Wiley & Sons, Inc.
Sparse Linear Representation This paper studies the question of how well a signal can be represented by a sparse linear combination of reference signals from an overcomplete dictionary. When the dictionary size is exponential in the dimension of the signal, an exact characterization of the optimal distortion is given as a function of the dictionary size exponent and the number of reference signals used for the linear representation. Roughly speaking, every signal is sparse if the dictionary size is exponentially large, no matter how small the exponent is. Furthermore, an iterative method similar to matching pursuit that successively finds the best reference signal at each stage gives asymptotically optimal representations. This method is essentially equivalent to successive refinement for multiple descriptions and provides a simple alternative proof of the successive refinability of white Gaussian sources.
Compressive Acquisition of Dynamic Scenes Compressive sensing (CS) is a new approach for the acquisition and recovery of sparse signals and images that enables sampling rates significantly below the classical Nyquist rate. Despite significant progress in the theory and methods of CS, little headway has been made in compressive video acquisition and recovery. Video CS is complicated by the ephemeral nature of dynamic events, which makes direct extensions of standard CS imaging architectures and signal models infeasible. In this paper, we develop a new framework for video CS for dynamic textured scenes that models the evolution of the scene as a linear dynamical system (LDS). This reduces the video recovery problem to first estimating the model parameters of the LDS from compressive measurements, from which the image frames are then reconstructed. We exploit the low-dimensional dynamic parameters (the state sequence) and high-dimensional static parameters (the observation matrix) of the LDS to devise a novel compressive measurement strategy that measures only the dynamic part of the scene at each instant and accumulates measurements over time to estimate the static parameters. This enables us to lower the compressive measurement rate considerably. We validate our approach with a range of experiments, including classification experiments that highlight the effectiveness of the proposed approach.
Analysis of frame-compatible subsampling structures for efficient 3DTV broadcast The evolution of the television market is led by 3DTV technology, and this tendency can accelerate during the next years according to expert forecasts. However, 3DTV delivery by broadcast networks is not currently developed enough, and acts as a bottleneck for the complete deployment of the technology. Thus, increasing interest is dedicated to stereo 3DTV formats compatible with current HDTV video equipment and infrastructure, as they may greatly encourage 3D acceptance. In this paper, different subsampling schemes for HDTV compatible transmission of both progressive and interlaced stereo 3DTV are studied and compared. The frequency characteristics and preserved frequency content of each scheme are analyzed, and a simple interpolation filter is specially designed. Finally, the advantages and disadvantages of the different schemes and filters are evaluated through quality testing on several progressive and interlaced video sequences.
1.015074
0.014286
0.014286
0.008316
0.007144
0.004762
0.00214
0.000967
0.000297
0.000096
0.000001
0
0
0
On nuclei and blocking sets in Desarguesian spaces A generalisation is given to recent results concerning the possible number of nuclei to a set of points in PG(n, q). As an application of this we obtain new lower bounds on the size of a t-fold blocking set of AG(n, q) in the case (t, q) > 1.
Polynomial multiplicities over finite fields and intersection sets
On intersection sets in desarguesian affine spaces Lower bounds on the size of t-fold blocking sets with respect to hyperplanes or t-intersection sets in AG(n, q) are obtained, some of which are sharp.
Covering finite fields with cosets of subspaces If V is a vector space over a finite field F, the minimum number of cosets of k-dimensional subspaces of V required to cover the nonzero points of V is established. This is done by first regarding V as a field extension of F and then associating with each coset L of a subspace of V a polynomial whose roots are the points of L. A covering with cosets is then equivalent to a product of such polynomials having the minimal polynomial satisfied by all nonzero points of V as a factor.
An algorithm for pronominal anaphora resolution This paper presents an algorithm for identifying the noun phrase antecedents of third person pronouns and lexical anaphors (reflexives and reciprocals). The algorithm applies to the syntactic representations generated by McCord's Slot Grammar parser and relies on salience measures derived from syntactic structure and a simple dynamic model of attentional state. Like the parser, the algorithm is implemented in Prolog. The authors have tested it extensively on computer manual texts and conducted a blind test on manual text containing 360 pronoun occurrences. The algorithm successfully identifies the antecedent of the pronoun for 86% of these pronoun occurrences. The relative contributions of the algorithm's components to its overall success rate in this blind test are examined. Experiments were conducted with an enhancement of the algorithm that contributes statistically modelled information concerning semantic and real-world relations to the algorithm's decision procedure. Interestingly, this enhancement only marginally improves the algorithm's performance (by 2%). The algorithm is compared with other approaches to anaphora resolution that have been proposed in the literature. In particular, the search procedure of Hobbs' algorithm was implemented in the Slot Grammar framework and applied to the sentences in the blind test set. The authors' algorithm achieves a higher rate of success (by 4%) than Hobbs' algorithm. The relation of the algorithm to the centering approach is discussed, as well as to models of anaphora resolution that invoke a variety of informational factors in ranking antecedent candidates.
Statistical Timing Analysis Considering Spatial Correlations using a Single Pert-Like Traversal We present an efficient statistical timing analysis algorithm that predicts the probability distribution of the circuit delay while incorporating the effects of spatial correlations of intra-die parameter variations, using a method based on principal component analysis. The method uses a PERT-like circuit graph traversal, and has a run-time that is linear in the number of gates and interconnects, as well as the number of grid partitions used to model spatial correlations. On average, the mean and standard deviation values computed by our method have errors of 0.2% and 0.9%, respectively, in comparison with a Monte Carlo simulation.
Fuzzy logic in control systems: fuzzy logic controller. I.
Compressed Remote Sensing of Sparse Objects The linear inverse source and scattering problems are studied from the perspective of compressed sensing. By introducing the sensor as well as target ensembles, the maximum number of recoverable targets is proved to be at least proportional to the number of measurement data modulo a log-square factor with overwhelming probability. Important contributions include the discoveries of the threshold aperture, consistent with the classical Rayleigh criterion, and the incoherence effect induced by random antenna locations. The predictions of theorems are confirmed by numerical simulations.
A Bayesian approach to image expansion for improved definition. Accurate image expansion is important in many areas of image analysis. Common methods of expansion, such as linear and spline techniques, tend to smooth the image data at edge regions. This paper introduces a method for nonlinear image expansion which preserves the discontinuities of the original image, producing an expanded image with improved definition. The maximum a posteriori (MAP) estimation techniques that are proposed for noise-free and noisy images result in the optimization of convex functionals. The expanded images produced from these methods will be shown to be aesthetically and quantitatively superior to images expanded by the standard methods of replication, linear interpolation, and cubic B-spline expansion.
A simple Cooperative diversity method based on network path selection Cooperative diversity has been recently proposed as a way to form virtual antenna arrays that provide dramatic gains in slow fading wireless environments. However, most of the proposed solutions require distributed space-time coding algorithms, the careful design of which is left for future investigation if there is more than one cooperative relay. We propose a novel scheme that alleviates these problems and provides diversity gains on the order of the number of relays in the network. Our scheme first selects the best relay from a set of M available relays and then uses this "best" relay for cooperation between the source and the destination. We develop and analyze a distributed method to select the best relay that requires no topology information and is based on local measurements of the instantaneous channel conditions. This method also requires no explicit communication among the relays. The success (or failure) to select the best available path depends on the statistics of the wireless channel, and a methodology to evaluate performance for any kind of wireless channel statistics, is provided. Information theoretic analysis of outage probability shows that our scheme achieves the same diversity-multiplexing tradeoff as achieved by more complex protocols, where coordination and distributed space-time coding for M relay nodes is required, such as those proposed by Laneman and Wornell (2003). The simplicity of the technique allows for immediate implementation in existing radio hardware and its adoption could provide for improved flexibility, reliability, and efficiency in future 4G wireless systems.
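A minimal sketch of the selection rule is shown below: each relay forms a local metric from its instantaneous source-relay and relay-destination channel gains (the paper considers both the bottleneck minimum and the harmonic mean of the two hops), and the relay with the largest metric is used. The Rayleigh-fading example and all names are illustrative, and the distributed timer-based implementation is omitted.

```python
import numpy as np

def best_relay(h_sr, h_rd, metric="min"):
    """Pick the relay index maximizing a per-relay metric of the two hop gains."""
    g_sr, g_rd = np.abs(h_sr) ** 2, np.abs(h_rd) ** 2    # instantaneous channel gains
    if metric == "min":
        score = np.minimum(g_sr, g_rd)                   # bottleneck-hop criterion
    else:                                                # harmonic mean of the two hops
        score = 2 * g_sr * g_rd / (g_sr + g_rd)
    return int(np.argmax(score))

# Example: 6 candidate relays with Rayleigh-faded source-relay / relay-destination links
rng = np.random.default_rng(6)
h_sr = (rng.normal(size=6) + 1j * rng.normal(size=6)) / np.sqrt(2)
h_rd = (rng.normal(size=6) + 1j * rng.normal(size=6)) / np.sqrt(2)
print(best_relay(h_sr, h_rd), best_relay(h_sr, h_rd, metric="harmonic"))
```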
Using polynomial chaos to compute the influence of multiple random surfers in the PageRank model The PageRank equation computes the importance of pages in a web graph relative to a single random surfer with a constant teleportation coefficient. To be globally relevant, the teleportation coefficient should account for the influence of all users. Therefore, we correct the PageRank formulation by modeling the teleportation coefficient as a random variable distributed according to user behavior. With this correction, the PageRank values themselves become random. We present two methods to quantify the uncertainty in the random PageRank: a Monte Carlo sampling algorithm and an algorithm based on the truncated polynomial chaos expansion of the random quantities. With each of these methods, we compute the expectation and standard deviation of the PageRanks. Our statistical analysis shows that the standard deviations of the PageRanks are uncorrelated with the PageRank vector.
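A minimal sketch of the Monte Carlo variant described above follows: the teleportation coefficient is drawn repeatedly from an assumed user-behavior distribution, PageRank is solved for each draw, and the mean and standard deviation of the resulting PageRank vectors are reported. The Beta distribution, the toy 4-node graph, and the tolerance are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def pagerank(P, alpha, tol=1e-12):
    """Power iteration for PageRank with a column-stochastic transition matrix P."""
    n = P.shape[0]
    x = np.full(n, 1.0 / n)
    while True:
        x_new = alpha * P @ x + (1 - alpha) / n
        if np.linalg.norm(x_new - x, 1) < tol:
            return x_new
        x = x_new

# Toy 4-node web graph (column j holds the out-link probabilities of page j).
P = np.array([[0.0, 0.5, 0.0, 0.0],
              [0.5, 0.0, 1.0, 0.5],
              [0.5, 0.5, 0.0, 0.5],
              [0.0, 0.0, 0.0, 0.0]])

rng = np.random.default_rng(7)
alphas = rng.beta(a=8, b=2, size=500) * 0.99        # assumed user-behavior model for alpha
ranks = np.array([pagerank(P, a) for a in alphas])
print("E[PageRank]   =", ranks.mean(axis=0).round(4))
print("Std[PageRank] =", ranks.std(axis=0).round(4))
```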
Using trapezoids for representing granular objects: Applications to learning and OWA aggregation We discuss the role and benefits of using trapezoidal representations of granular information. We focus on the use of level sets as a tool for implementing many operations on trapezoidal sets. We point out the simplification that the linearity of the trapezoid brings by requiring us to perform operations on only two level sets. We investigate the classic learning algorithm in the case when our observations are granule objects represented as trapezoidal fuzzy sets. An important issue that arises is the adverse effect that very uncertain observations have on the quality of our estimates. We suggest an approach to addressing this problem using the specificity of the observations to control its effect. We next consider the OWA aggregation of information represented as trapezoids. An important problem that arises here is the ordering of the trapezoidal fuzzy sets needed for the OWA aggregation. We consider three approaches to accomplish this ordering based on the location, specificity and fuzziness of the trapezoids. From these three different approaches three fundamental methods of ordering are developed. One based on the mean of the 0.5 level sets, another based on the length of the 0.5 level sets and a third based on the difference in lengths of the core and support level sets. Throughout this work particular emphasis is placed on the simplicity of working with trapezoids while still retaining a rich representational capability.
Heden's bound on maximal partial spreads We prove Heden's result that the deficiency δ of a maximal partial spread in PG(3, q) is greater than 1 + ½(1+√5)√q unless δ−1 is a multiple of p, where q = p^n. When q is odd and not a square, we are able to improve this lower bound to roughly √(3q).
Designing type-2 fuzzy logic system controllers via fuzzy Lyapunov synthesis for the output regulator of a servomechanism with nonlinear backlash Fuzzy Lyapunov Synthesis is extended to the design of Type-2 Fuzzy Logic System Controllers for the output regulation problem for a servomechanism with nonlinear backlash. The problem in question is to design a feedback controller so as to obtain the closed-loop system in which all trajectories are bounded and the load of the driver is regulated to a desired position while also attenuating the influence of external disturbances. The servomotor position is the only measurement available for feedback; the proposed extension is far from trivial because of nonminimum phase properties of the system. Performance issues of the Type-2 Fuzzy Logic Regulator constructed are illustrated in a simulation study.
1.08
0.0223
0.017067
0.003846
0
0
0
0
0
0
0
0
0
0
Efficient Uncertainty Quantification for the Periodic Steady State of Forced and Autonomous Circuits This brief proposes an uncertainty quantification method for the periodic steady-state (PSS) analysis with both Gaussian and non-Gaussian variations. Our stochastic testing formulation for the PSS problem provides superior efficiency over both Monte Carlo methods and existing spectral methods. The numerical implementation of a stochastic shooting Newton solver is presented for both forced and autonomous circuits. Simulation results on some analog/RF circuits are reported to show the effectiveness of our proposed algorithms.
Probabilistic Power Flow Computation via Low-Rank and Sparse Tensor Recovery This paper presents a tensor-recovery method to solve probabilistic power flow problems. Our approach generates a high-dimensional and sparse generalized polynomial-chaos expansion that provides useful statistical information. The result can also speed up other essential routines in power systems (e.g., stochastic planning, operations and controls). Instead of simulating a power flow equation at all quadrature points, our approach only simulates an extremely small subset of samples. We suggest a model to exploit the underlying low-rank and sparse structure of high-dimensional simulation data arrays, making our technique applicable to power systems with many random parameters. We also present a numerical method to solve the resulting nonlinear optimization problem. Our algorithm is implemented in MATLAB and is verified by several benchmarks in MATPOWER 5.1. Accurate results are obtained for power systems with up to 50 independent random parameters, with a speedup factor up to 9×10^20.
Nonparametric multivariate density estimation: a comparative study The paper algorithmically and empirically studies two major types of nonparametric multivariate density estimation techniques, where no assumption is made about the data being drawn from any of known parametric families of distribution. The first type is the popular kernel method (and several of its variants) which uses locally tuned radial basis (e.g., Gaussian) functions to interpolate the multidimensional density; the second type is based on an exploratory projection pursuit technique which interprets the multidimensional density through the construction of several 1D densities along highly “interesting” projections of multidimensional data. Performance evaluations using training data from mixture Gaussian and mixture Cauchy densities are presented. The results show that the curse of dimensionality and the sensitivity of control parameters have a much more adverse impact on the kernel density estimators than on the projection pursuit density estimators
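For orientation, a minimal sketch of the first family compared above, a product-Gaussian kernel density estimator with a rule-of-thumb bandwidth, is given below; the projection pursuit estimator is not reproduced. The bandwidth rule, the mixture test data, and all names are illustrative.

```python
import numpy as np

def gaussian_kde(data, query, bandwidth=None):
    """Product-Gaussian kernel density estimate at the query points.

    data  : (n, d) training samples; query : (m, d) evaluation points.
    """
    n, d = data.shape
    if bandwidth is None:
        # Scott's rule of thumb, one bandwidth per dimension.
        bandwidth = data.std(axis=0, ddof=1) * n ** (-1.0 / (d + 4))
    diff = (query[:, None, :] - data[None, :, :]) / bandwidth     # (m, n, d)
    kern = np.exp(-0.5 * np.sum(diff ** 2, axis=-1))              # unnormalized Gaussian kernels
    norm = n * np.prod(bandwidth) * (2 * np.pi) ** (d / 2)
    return kern.sum(axis=1) / norm

# Example: estimate the density of a 2-D Gaussian mixture at a few query points
rng = np.random.default_rng(8)
data = np.vstack([rng.normal(0, 1, (400, 2)), rng.normal(4, 0.5, (400, 2))])
query = np.array([[0.0, 0.0], [4.0, 4.0], [2.0, 2.0]])
print(gaussian_kde(data, query).round(4))
```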
Phase Noise and Noise Induced Frequency Shift in Stochastic Nonlinear Oscillators Phase noise plays an important role in the performances of electronic oscillators. Traditional approaches describe the phase noise problem as a purely diffusive process. In this paper we develop a novel phase model reduction technique for phase noise analysis of nonlinear oscillators subject to stochastic inputs. We obtain analytical equations for both the phase deviation and the probability density function of the phase deviation. We show that, in general, the phase reduced models include non Markovian terms. Under the Markovian assumption, we demonstrate that the effect of white noise is to generate both phase diffusion and a frequency shift, i.e. phase noise is best described as a convection-diffusion process. The analysis of a solvable model shows the accuracy of our theory, and that it gives better predictions than traditional phase models.
Weakly nonlinear circuit analysis based on fast multidimensional inverse Laplace transform There have been continuing thrusts in developing efficient modeling techniques for circuit simulation. However, most circuit simulation methods are time-domain solvers. In this paper we propose a frequency-domain simulation method based on Laguerre function expansion. The proposed method handles both linear and nonlinear circuits. The Laguerre method can invert multidimensional Laplace transform efficiently with a high accuracy, which is a key step of the proposed method. Besides, an adaptive mesh refinement (AMR) technique is developed and its parallel implementation is introduced to speed up the computation. Numerical examples show that our proposed method can accurately simulate large circuits while enjoying low computation complexity.
Compact model order reduction of weakly nonlinear systems by associated transform We advance a recently proposed approach, called the associated transform, for computing slim projection matrices serving high-order Volterra transfer functions in the context of weakly nonlinear model order reduction (NMOR). The innovation is to carry out an association of multivariate Laplace variables in high-order multiple-input multiple-output transfer functions to generate univariate single-s transfer functions. In contrast to conventional projection-based NMOR, which finds projection subspaces about every s_i in multivariate transfer functions, only that about a single s is required in the proposed approach. This leads to much more compact reduced-order models without compromising accuracy. Specifically, the proposed NMOR procedure first converts the original set of Volterra transfer functions into a new set of linear transfer functions, which then allows direct utilization of linear MOR techniques for modeling weakly nonlinear systems with either single-tone or multi-tone inputs. An adaptive algorithm is also given to govern the selection of appropriate basis orders in different Volterra transfer functions. Numerical examples then verify the effectiveness of the proposed scheme.
Stochastic testing simulator for integrated circuits and MEMS: Hierarchical and sparse techniques Process variations are a major concern in today's chip design since they can significantly degrade chip performance. To predict such degradation, existing circuit and MEMS simulators rely on Monte Carlo algorithms, which are typically too slow. Therefore, novel fast stochastic simulators are highly desired. This paper first reviews our recently developed stochastic testing simulator that can achieve speedup factors of hundreds to thousands over Monte Carlo. Then, we develop a fast hierarchical stochastic spectral simulator to simulate a complex circuit or system consisting of several blocks. We further present a fast simulation approach based on anchored ANOVA (analysis of variance) for some design problems with many process variations. This approach can reduce the simulation cost and can identify which variation sources have strong impacts on the circuit's performance. The simulation results of some circuit and MEMS examples are reported to show the effectiveness of our simulator.
Performance optimization of VLSI interconnect layout This paper presents a comprehensive survey of existing techniques for interconnect optimization during the VLSI physical design process, with emphasis on recent studies on interconnect design and optimization for high-performance VLSI circuit design under the deep submicron fabrication technologies. First, we present a number of interconnect delay models and driver/gate delay models of various degrees of accuracy and efficiency which are most useful to guide the circuit design and interconnect optimization process. Then, we classify the existing work on optimization of VLSI interconnect into the following three categories and discuss the results in each category in detail: (i) topology optimization for high-performance interconnects, including the algorithms for total wire length minimization, critical path length minimization, and delay minimization; (ii) device and interconnect sizing, including techniques for efficient driver, gate, and transistor sizing, optimal wire sizing, and simultaneous topology construction, buffer insertion, buffer and wire sizing; (iii) high-performance clock routing, including abstract clock net topology generation and embedding, planar clock routing, buffer and wire sizing for clock nets, non-tree clock routing, and clock schedule optimization. For each method, we discuss its effectiveness, its advantages and limitations, as well as its computational efficiency. We group the related techniques according to either their optimization techniques or optimization objectives so that the reader can easily compare the quality and efficiency of different solutions.
A hierarchical floating random walk algorithm for fabric-aware 3D capacitance extraction With the adoption of ultra regular fabric paradigms for controlling design printability at the 22 nm node and beyond, there is an emerging need for a layout-driven, pattern-based parasitic extraction of alternative fabric layouts. In this paper, we propose a hierarchical floating random walk (HFRW) algorithm for computing the 3D capacitances of a large number of topologically different layout configurations that are all composed of the same layout motifs. Our algorithm is not a standard hierarchical domain decomposition extension of the well established floating random walk technique, but rather a novel algorithm that employs Markov Transition Matrices. Specifically, unlike the fast-multipole boundary element method and hierarchical domain decomposition (which use a far-field approximation to gain computational efficiency), our proposed algorithm is exact and does not rely on any tradeoff between accuracy and computational efficiency. Instead, it relies on a tradeoff between memory and computational efficiency. Since floating random walk type of algorithms have generally minimal memory requirements, such a tradeoff does not result in any practical limitations. The main practical advantage of the proposed algorithm is its ability to handle a set of layout configurations in a complexity that is basically independent of the set size. For instance, in a large 3D layout example, the capacitance calculation of 120 different configurations made of similar motifs is accomplished in the time required to solve independently just 2 configurations, i.e. a 60× speedup.
Residual Minimizing Model Interpolation for Parameterized Nonlinear Dynamical Systems. We present a method for approximating the solution of a parameterized, nonlinear dynamical system using an affine combination of solutions computed at other points in the input parameter space. The coefficients of the affine combination are computed with a nonlinear least squares procedure that minimizes the residual of the governing equations. The approximation properties of this residual minimizing scheme are comparable to existing reduced basis and POD-Galerkin model reduction methods, but its implementation requires only independent evaluations of the nonlinear forcing function. It is particularly appropriate when one wishes to approximate the states at a few points in time without time marching from the initial conditions. We prove some interesting characteristics of the scheme, including an interpolatory property, and we present heuristics for mitigating the effects of the ill-conditioning and reducing the overall cost of the method. We apply the method to representative numerical examples from kinetics (a three-state system with one parameter controlling the stiffness) and conductive heat transfer (a nonlinear parabolic PDE with a random field model for the thermal conductivity).
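A hedged sketch of the residual-minimizing idea under stand-in assumptions: snapshots are computed at a few parameter values, and the affine weights for a new parameter are found by a nonlinear least-squares fit of the governing residual. The toy residual below is illustrative, not one of the paper's examples.

```python
import numpy as np
from scipy.optimize import least_squares

def residual(u, p):
    # Stand-in nonlinear steady-state residual r(u; p); replace with the discretised
    # governing equations of the actual model of interest.
    return u ** 3 + p * u - 1.0

# Snapshot solutions computed at a few parameter values (here simply by a nonlinear solve).
params = np.array([0.5, 1.0, 2.0])
U = np.stack([least_squares(residual, x0=np.ones(4), args=(p,)).x for p in params], axis=1)

def approx(p_new):
    """Affine combination of the snapshot columns minimizing the residual at p_new."""
    def obj(c_free):
        c = np.append(c_free, 1.0 - c_free.sum())     # affine constraint: weights sum to one
        return residual(U @ c, p_new)
    c_free = least_squares(obj, x0=np.full(len(params) - 1, 1.0 / len(params))).x
    c = np.append(c_free, 1.0 - c_free.sum())
    return U @ c

print(approx(1.5))    # approximate state at an unseen parameter value
```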
Wavelet-domain compressive signal reconstruction using a Hidden Markov Tree model Compressive sensing aims to recover a sparse or compressible signal from a small set of projections onto random vectors; conventional solutions involve linear programming or greedy algorithms that can be computationally expensive. Moreover, these recovery techniques are generic and assume no particular structure in the signal aside from sparsity. In this paper, we propose a new algorithm that enables fast recovery of piecewise smooth signals, a large and useful class of signals whose sparse wavelet expansions feature a distinct "connected tree" structure. Our algorithm fuses recent results on iterative reweighted ℓ1-norm minimization with the wavelet Hidden Markov Tree model. The resulting optimization-based solver outperforms the standard compressive recovery algorithms as well as previously proposed wavelet-based recovery algorithms. As a bonus, the algorithm reduces the number of measurements necessary to achieve low-distortion reconstruction.
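The fusion with the wavelet Hidden Markov Tree model is specific to the paper; the sketch below shows only the generic iterative reweighted ℓ1 loop it builds on, with the usual 1/(|x|+ε) weight update standing in for the tree-based weights. All sizes and constants are illustrative.

```python
import numpy as np

def weighted_ista(A, y, w, lam=0.1, n_iter=300):
    """Proximal gradient (ISTA) for min_x 0.5*||Ax - y||^2 + lam * sum_i w_i*|x_i|."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the smooth part
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - y) / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam * w / L, 0.0)
    return x

def reweighted_l1(A, y, lam=0.1, rounds=4, eps=1e-3):
    w = np.ones(A.shape[1])
    for _ in range(rounds):
        x = weighted_ista(A, y, w, lam)
        w = 1.0 / (np.abs(x) + eps)            # tree-structured (HMT-driven) weights would go here
    return x

rng = np.random.default_rng(1)
A = rng.normal(size=(60, 200)) / np.sqrt(60)
x_true = np.zeros(200)
x_true[[3, 50, 120]] = [1.0, -2.0, 1.5]
print(np.flatnonzero(np.abs(reweighted_l1(A, A @ x_true)) > 0.1))   # expected: [3, 50, 120]
```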
On the fractional covering number of hypergraphs The fractional covering number r* of a hypergraph H(V, E) is defined to be the minimum
Sparse representation and learning in visual recognition: Theory and applications Sparse representation and learning has been widely used in computational intelligence, machine learning, computer vision and pattern recognition, etc. Mathematically, solving sparse representation and learning involves seeking the sparsest linear combination of basis functions from an overcomplete dictionary. A rationale behind this is the sparse connectivity between nodes in the human brain. This paper presents a survey of some recent work on sparse representation, learning and modeling with emphasis on visual recognition. It covers both the theory and application aspects. We first review the sparse representation and learning theory including general sparse representation, structured sparse representation, high-dimensional nonlinear learning, Bayesian compressed sensing, sparse subspace learning, non-negative sparse representation, robust sparse representation, and efficient sparse representation. We then introduce the applications of sparse theory to various visual recognition tasks, including feature representation and selection, dictionary learning, Sparsity Induced Similarity (SIS) measures, sparse coding based classification frameworks, and sparsity-related topics.
On the Estimation of Coherence Low-rank matrix approximations are often used to help scale standard machine learning algorithms to large-scale problems. Recently, matrix coherence has been used to characterize the ability to extract global information from a subset of matrix entries in the context of these low-rank approximations and other sampling-based algorithms, e.g., matrix completion, robust PCA. Since coherence is defined in terms of the singular vectors of a matrix and is expensive to compute, the practical significance of these results largely hinges on the following question: Can we efficiently and accurately estimate the coherence of a matrix? In this paper we address this question. We propose a novel algorithm for estimating coherence from a small number of columns, formally analyze its behavior, and derive a new coherence-based matrix approximation bound based on this analysis. We then present extensive experimental results on synthetic and real datasets that corroborate our worst-case theoretical analysis, yet provide strong support for the use of our proposed algorithm whenever low-rank approximation is being considered. Our algorithm efficiently and accurately estimates matrix coherence across a wide range of datasets, and these coherence estimates are excellent predictors of the effectiveness of sampling-based matrix approximation on a case-by-case basis.
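For reference, the exact (and expensive) quantity being estimated can be computed directly from the top singular vectors. A minimal sketch with an illustrative low-rank matrix follows; this is the brute-force definition, not the paper's column-sampling estimator.

```python
import numpy as np

def coherence(M, r):
    """Exact rank-r coherence: (n/r) * max_i ||i-th row of U_r||^2, where U_r holds
    the top-r left singular vectors of M (the leverage-score definition)."""
    n = M.shape[0]
    U, _, _ = np.linalg.svd(M, full_matrices=False)
    leverage = (U[:, :r] ** 2).sum(axis=1)
    return n * leverage.max() / r

rng = np.random.default_rng(0)
M = rng.normal(size=(500, 5)) @ rng.normal(size=(5, 300))   # a random rank-5 matrix
print(coherence(M, r=5))     # lies between 1 (incoherent) and n/r (maximally coherent)
```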
1.024732
0.029602
0.025305
0.023841
0.023841
0.01561
0.009768
0.003973
0.00072
0.000071
0.000001
0
0
0
In-situ soil moisture sensing: Measurement scheduling and estimation using compressive sensing We consider the problem of monitoring soil moisture evolution using a wireless network of in-situ underground sensors. To reduce cost and prolong lifetime, it is highly desirable to rely on fewer measurements and estimate with higher accuracy the original signal (soil moisture temporal evolution). In this paper we explore results from the compressive sensing (CS) literature and examine their applicability to this problem. Our main challenge lies in the selection of two matrices, the measurement matrix and a representation basis. The physical constraints of our problem make it highly nontrivial to select these matrices, so that the latter can sufficiently sparsify the underlying signal while at the same time being sufficiently incoherent with the former, two common pre-conditions for CS techniques to work well. We construct a representation basis by exploiting unique features of soil moisture evolution. We show that this basis attains a very good tradeoff between its ability to sparsify the signal and its incoherence with measurement matrices that are consistent with our physical constraints. Extensive numerical evaluation is performed on both real, high-resolution soil moisture data and simulated data, and through comparison with a closed-loop scheduling approach. Our results demonstrate that our approach is extremely effective in reconstructing the soil moisture process with high accuracy and low sampling rate.
Efficient cross-correlation via sparse representation in sensor networks Cross-correlation is a popular signal processing technique used in numerous localization and tracking systems for obtaining reliable range information. However, a practical efficient implementation has not yet been achieved on resource constrained wireless sensor network platforms. We propose cross-correlation via sparse representation: a new framework for ranging based on l1-minimization. The key idea is to compress the signal samples on the mote platform by efficient random projections and transfer them to a central device, where a convex optimization process estimates the range by exploiting its sparsity in our proposed correlation domain. Through sparse representation theory validation, extensive empirical studies and experiments on an end-to-end acoustic ranging system implemented on resource limited off-the-shelf sensor nodes, we show that the proposed framework, together with the proposed correlation domain, achieves up to two orders of magnitude better performance compared to naive approaches such as working in the DCT domain or downsampling. Furthermore, compared to cross-correlation results, 30-40% of the measurements are sufficient to obtain precise range estimates with an additional bias of only 2-6cm for high accuracy application requirements, while 5% of the measurements are adequate to achieve approximately 100cm precision for lower accuracy applications.
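A rough sketch of the underlying idea, under simplifying assumptions (circular shifts, a Gaussian measurement matrix, a plain ISTA solver rather than whatever optimizer the authors used): the received signal is modelled as a nearly 1-sparse combination of shifted copies of the reference, and the shift is recovered from a small number of random projections.

```python
import numpy as np

def estimate_delay(ref, received, m, lam=0.05, n_iter=400, seed=0):
    """Model the received signal as a near 1-sparse combination of shifted reference
    copies (the 'correlation domain') and recover the shift from m random projections."""
    n = len(ref)
    D = np.stack([np.roll(ref, k) for k in range(n)], axis=1)        # shift dictionary
    Phi = np.random.default_rng(seed).normal(size=(m, n)) / np.sqrt(m)
    y, A = Phi @ received, Phi @ D
    L = np.linalg.norm(A, 2) ** 2
    s = np.zeros(n)
    for _ in range(n_iter):                                          # plain ISTA for the l1 problem
        z = s - A.T @ (A @ s - y) / L
        s = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return int(np.argmax(np.abs(s)))                                 # estimated delay in samples

rng = np.random.default_rng(1)
ref = rng.normal(size=256)
received = np.roll(ref, 37) + 0.05 * rng.normal(size=256)
print(estimate_delay(ref, received, m=64))                           # expected to be close to 37
```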
Privacy-Enabled Object Tracking in Video Sequences Using Compressive Sensing In a typical video analysis framework, video sequences are decoded and reconstructed in the pixel domain before being processed for high level tasks such as classification or detection. Nevertheless, in some application scenarios, it might be of interest to complete these analysis tasks without disclosing sensitive data, e.g. the identity of people captured by surveillance cameras. In this paper we propose a new coding scheme suitable for video surveillance applications that allows tracking of video objects without the need to reconstruct the sequence, thus enabling privacy protection. By taking advantage of recent findings in the compressive sensing literature, we encode a video sequence with a limited number of pseudo-random projections of each frame. At the decoder, we exploit the sparsity that characterizes background subtracted images in order to recover the location of the foreground object. We also leverage the prior knowledge about the estimated location of the object, which is predicted by means of a particle filter, to improve the recovery of the foreground object location. The proposed framework enables privacy, in the sense that it is impossible to reconstruct the original video content from the encoded random projections alone, as well as secrecy, since decoding is prevented if the seed used to generate the random projections is not available.
Efficient background subtraction for real-time tracking in embedded camera networks Background subtraction is often the first step of many computer vision applications. For a background subtraction method to be useful in embedded camera networks, it must be both accurate and computationally efficient because of the resource constraints on embedded platforms. This makes many traditional background subtraction algorithms unsuitable for embedded platforms because they use complex statistical models to handle subtle illumination changes. These models make them accurate but the computational requirement of these complex models is often too high for embedded platforms. In this paper, we propose a new background subtraction method which is both accurate and computationally efficient. The key idea is to use compressive sensing to reduce the dimensionality of the data while retaining most of the information. By using multiple datasets, we show that the accuracy of our proposed background subtraction method is comparable to that of the traditional background subtraction methods. Moreover, real implementation on an embedded camera platform shows that our proposed method is at least 5 times faster, and consumes significantly less energy and memory resources than the conventional approaches. Finally, we demonstrate the feasibility of the proposed method by the implementation and evaluation of an end-to-end real-time embedded camera network target tracking application.
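A minimal sketch of the dimensionality-reduction idea (illustrative sizes only, not the authors' pipeline): frames are compared with the background model after projection to a much lower-dimensional space, so the per-frame work scales with the compressed dimension m rather than with the number of pixels.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pixels, m = 64 * 64, 128                         # frame size and compressed dimension
Phi = rng.normal(size=(m, n_pixels)) / np.sqrt(m)  # fixed random projection matrix

background = rng.random(n_pixels)                  # stand-in static background frame
y_bg = Phi @ background                            # background model kept in the compressed domain

def foreground_energy(frame):
    """Distance between the compressed frame and the compressed background model;
    random projections approximately preserve the pixel-domain difference energy."""
    return np.linalg.norm(Phi @ frame - y_bg)

empty = background + 0.01 * rng.normal(size=n_pixels)   # frame with no object, only noise
with_obj = background.copy()
with_obj[:400] += 1.0                                   # frame with a bright foreground blob
print(foreground_energy(empty), foreground_energy(with_obj))   # the second value is clearly larger
```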
Compressive data gathering for large-scale wireless sensor networks This paper presents the first complete design to apply compressive sampling theory to sensor data gathering for large-scale wireless sensor networks. The successful scheme developed in this research is expected to offer fresh frame of mind for research in both compressive sampling applications and large-scale wireless sensor networks. We consider the scenario in which a large number of sensor nodes are densely deployed and sensor readings are spatially correlated. The proposed compressive data gathering is able to reduce global scale communication cost without introducing intensive computation or complicated transmission control. The load balancing characteristic is capable of extending the lifetime of the entire sensor network as well as individual sensors. Furthermore, the proposed scheme can cope with abnormal sensor readings gracefully. We also carry out the analysis of the network capacity of the proposed compressive data gathering and validate the analysis through ns-2 simulations. More importantly, this novel compressive data gathering has been tested on real sensor data and the results show the efficiency and robustness of the proposed scheme.
Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information This paper considers the model problem of reconstructing an object from incomplete frequency samples. Consider a discrete-time signal f ∈ C^N and a randomly chosen set of frequencies Ω. Is it possible to reconstruct f from the partial knowledge of its Fourier coefficients on the set Ω? A typical result of this paper is as follows. Suppose that f is a superposition of |T| spikes f(t) = Σ_{τ∈T} f(τ)δ(t−τ) obeying |T| ≤ C_M·(log N)^{-1}·|Ω| for some constant C_M > 0. We do not know the locations of the spikes nor their amplitudes. Then with probability at least 1−O(N^{-M}), f can be reconstructed exactly as the solution to the ℓ1 minimization problem. In short, exact recovery may be obtained by solving a convex optimization problem. We give numerical values for C_M which depend on the desired probability of success. Our result may be interpreted as a novel kind of nonlinear sampling theorem. In effect, it says that any signal made out of |T| spikes may be recovered by convex programming from almost every set of frequencies of size O(|T|·log N). Moreover, this is nearly optimal in the sense that any method succeeding with probability 1−O(N^{-M}) would in general require a number of frequency samples at least proportional to |T|·log N. The methodology extends to a variety of other situations and higher dimensions. For example, we show how one can reconstruct a piecewise constant (one- or two-dimensional) object from incomplete frequency samples, provided that the number of jumps (discontinuities) obeys the condition above, by minimizing other convex functionals such as the total variation of f.
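A small self-contained sketch of the recovery principle, posed as a basis-pursuit linear program over the real and imaginary parts of randomly chosen Fourier samples; the problem sizes and the use of scipy's LP solver are my choices for illustration, not the paper's.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
N, S, m = 128, 5, 40                               # signal length, number of spikes, observed frequencies
f = np.zeros(N)
f[rng.choice(N, S, replace=False)] = rng.normal(size=S)

F = np.fft.fft(np.eye(N))                          # DFT matrix
omega = rng.choice(N, m, replace=False)            # random set Omega of observed frequencies
A = np.vstack([F[omega].real, F[omega].imag])      # real/imaginary parts of the partial DFT
b = A @ f                                          # the observed Fourier data

# Basis pursuit  min ||x||_1  s.t.  A x = b, written as a linear program in (x+, x-) >= 0.
c = np.ones(2 * N)
res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=b,
              bounds=[(0, None)] * (2 * N), method="highs")
x_hat = res.x[:N] - res.x[N:]
print(np.max(np.abs(x_hat - f)))                   # expected to be essentially zero (exact recovery)
```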
Exact Reconstruction of Sparse Signals via Nonconvex Minimization Several authors have shown recently that it is possible to reconstruct exactly a sparse signal from fewer linear measurements than would be expected from traditional sampling theory. The methods used involve computing the signal of minimum ℓ1 norm among those having the given measurements. We show that by replacing the ℓ1 norm with the ℓp norm with p < 1, exact reconstruction is possible ...
Fuzzy set methods for qualitative and natural language oriented simulation The author discusses the approach of using fuzzy set theory to create a formal way of viewing the qualitative simulation of models whose states, inputs, outputs, and parameters are uncertain. Simulation was performed using detailed and accurate models, and it was shown how input and output trajectories could reflect linguistic (or qualitative) changes in a system. Uncertain variables are encoded using triangular fuzzy numbers, and three distinct fuzzy simulation approaches (Monte Carlo, correlated and uncorrelated) are defined. The methods discussed are also valid for discrete event simulation; experiments have been performed on the fuzzy simulation of a single server queuing model. In addition, an existing C-based simulation toolkit, SimPack, was augmented to include the capabilities for modeling using fuzzy arithmetic and linguistic association, and a C++ class definition was coded for fuzzy number types
Proactive public key and signature systems Emerging applications like electronic commerce and secure communications over open networks have made clear the fundamental role of public key cryptography as a unique enabler for world-wide scale security solutions. On the other hand, these solutions clearly expose the fact that the protection of private keys is a security bottleneck in these sensitive applications. This problem is further worsened in the cases where a single and unchanged private key must be kept secret for a very long time (such is the case of certification authority keys, bank and e-cash keys, etc.). One crucial defense against exposure of private keys is offered by threshold cryptography where the private key functions (like signatures or decryption) are distributed among several parties such that a predetermined number of parties must cooperate in order to correctly perform these operations. This protects keys from any single point of failure. An attacker needs to break into a multiplicity of locations before it can compromise the system. However, in the case of long-lived keys the attacker still has a considerable period of time (like a few years) to gradually break the system. Here we present proactive public key systems where the threshold solutions are further enhanced by periodic
Completeness and consistency conditions for learning fuzzy rules The completeness and consistency conditions were introduced in order to achieve acceptable concept recognition rules. In real problems, we can handle noise-affected examples and it is not always possible to maintain both conditions. Moreover, when we use fuzzy information there is a partial matching between examples and rules, therefore the consistency condition becomes a matter of degree. In this paper, a learning algorithm based on soft consistency and completeness conditions is proposed. This learning algorithm combines in a single process rule and feature selection and it is tested on different databases.
Deblurring from highly incomplete measurements for remote sensing When we take photos, we often get blurred pictures because of hand shake, motion, insufficient light, unsuited focal length, or other disturbances. Recently, a compressed-sensing (CS) theorem which provides a new sampling theory for data acquisition has been applied for medical and astronomic imaging. The CS makes it possible to take superresolution photos using only one or a few pixels, rather th...
An overview of quality of experience measurement challenges for video applications in IP networks The increase in multimedia content on the Internet has created a renewed interest in quality assessment. There is, however, a main difference from traditional quality assessment approaches: the focus is now on user-perceived quality, as opposed to the classically proposed network-centered approach. In this paper we overview the most relevant challenges to perform Quality of Experience (QoE) assessment in IP networks and highlight the particular considerations necessary when compared to alternative mechanisms, already deployed, such as Quality of Service (QoS). To assist in handling such challenges we first discuss the different approaches to Quality of Experience assessment along with the most relevant QoE metrics, and then we discuss how they are used to provide objective results about user satisfaction.
Fuzzy Concepts and Formal Methods: A Fuzzy Logic Toolkit for Z It has been recognised that formal methods are useful as a modelling tool in requirements engineering. Specification languages such as Z permit the precise and unambiguous modelling of system properties and behaviour. However, some system problems, particularly those drawn from the IS problem domain, may be difficult to model in crisp or precise terms. It may also be desirable that formal modelling should commence as early as possible, even when our understanding of parts of the problem domain is only approximate. This paper suggests fuzzy set theory as a possible representation scheme for this imprecision or approximation. We provide a summary of a toolkit that defines the operators, measures and modifiers necessary for the manipulation of fuzzy sets and relations. We also provide some examples of the laws which establish an isomorphism between the extended notation presented here and conventional Z when applied to boolean sets and relations.
Designing type-2 fuzzy logic system controllers via fuzzy Lyapunov synthesis for the output regulator of a servomechanism with nonlinear backlash Fuzzy Lyapunov Synthesis is extended to the design of Type-2 Fuzzy Logic System Controllers for the output regulation problem for a servomechanism with nonlinear backlash. The problem in question is to design a feedback controller so as to obtain the closed-loop system in which all trajectories are bounded and the load of the driver is regulated to a desired position while also attenuating the influence of external disturbances. The servomotor position is the only measurement available for feedback; the proposed extension is far from trivial because of nonminimum phase properties of the system. Performance issues of the Type-2 Fuzzy Logic Regulator constructed are illustrated in a simulation study.
1.064
0.077333
0.048
0.04
0.005365
0.000401
0.000006
0
0
0
0
0
0
0
Sparse Event Detection In Wireless Sensor Networks Using Compressive Sensing Compressive sensing is a revolutionary idea proposed recently to achieve much lower sampling rates for sparse signals. For large wireless sensor networks, the events are relatively sparse compared with the number of sources. Because of deployment cost, the number of sensors is limited, and due to energy constraints, not all the sensors are turned on all the time. In this paper, the first contribution is to formulate the problem of sparse event detection in wireless sensor networks as a compressive sensing problem. The number of (wake-up) sensors can be greatly reduced to a level similar to the number of sparse events, which is much smaller than the total number of sources. Second, we suppose the events have a binary nature, and employ Bayesian detection using this prior information. Finally, we analyze the performance of the compressive sensing algorithms under Gaussian noise. From the simulation results, we show that the sampling rate can be reduced to 25% without sacrificing performance. As the sampling rate decreases further, the performance degrades gradually down to a sampling rate of 10%. Our proposed detection algorithm has much better performance than the ℓ1-magic algorithm proposed in the literature.
RIDA: a robust information-driven data compression architecture for irregular wireless sensor networks In this paper, we propose and evaluate RIDA, a novel information-driven architecture for distributed data compression in a sensor network, allowing it to conserve energy and bandwidth and potentially enabling high-rate data sampling. The key idea is to determine the data correlation among a group of sensors based on the value of the data itself to significantly improve compression. Hence, this approach moves beyond traditional data compression schemes which rely only on spatial and temporal data correlation. A logical mapping, which assigns indices to nodes based on the data content, enables simple implementation, on nodes, of data transformation without any other information. The logical mapping approach also adapts particularly well to irregular sensor network topologies. We evaluate our architecture with both Discrete Cosine Transform (DCT) and Discrete Wavelet Transform (DWT) on publicly available real-world data sets. Our experiments on both simulation and real data show that 30% of energy and 80-95% of the bandwidth can be saved for typical multi-hop data networks. Moreover, the original data can be retrieved after decompression with a low error of about 3%. Furthermore, we also propose a mechanism to detect and classify missing or faulty nodes, showing accuracy and recall of 95% when half of the nodes in the network are missing or faulty.
Practical data compression in wireless sensor networks: A survey Power consumption is a critical problem affecting the lifetime of wireless sensor networks. A number of techniques have been proposed to solve this issue, such as energy-efficient medium access control or routing protocols. Among those proposed techniques, the data compression scheme is one that can be used to reduce transmitted data over wireless channels. This technique leads to a reduction in the required inter-node communication, which is the main power consumer in wireless sensor networks. In this article, a comprehensive review of existing data compression approaches in wireless sensor networks is provided. First, suitable sets of criteria are defined to classify existing techniques as well as to determine what practical data compression in wireless sensor networks should be. Next, the details of each classified compression category are described. Finally, their performance, open issues, limitations and suitable applications are analyzed and compared based on the criteria of practical data compression in wireless sensor networks.
Multiresolution Spatial and Temporal Coding in a Wireless Sensor Network for Long-Term Monitoring Applications In many WSN (wireless sensor network) applications, such as [1], [2], [3], the targets are to provide long-term monitoring of environments. In such applications, energy is a primary concern because sensor nodes have to regularly report data to the sink and need to continuously work for a very long time so that users may periodically request a rough overview of the monitored environment. On the other hand, users may occasionally query more in-depth data of certain areas to analyze abnormal events. These requirements motivate us to propose a multiresolution compression and query (MRCQ) framework to support in-network data compression and data storage in WSNs from both space and time domains. Our MRCQ framework can organize sensor nodes hierarchically and establish multiresolution summaries of sensing data inside the network, through spatial and temporal compressions. In the space domain, only lower resolution summaries are sent to the sink; the other higher resolution summaries are stored in the network and can be obtained via queries. In the time domain, historical data stored in sensor nodes exhibit a finer resolution for more recent data, and a coarser resolution for older data. Our methods consider the hardware limitations of sensor nodes. So, the result is expected to save sensors' energy significantly, and thus, can support long-term monitoring WSN applications. A prototyping system is developed to verify its feasibility. Simulation results also show the efficiency of MRCQ compared to existing work.
Joint Source–Channel Communication for Distributed Estimation in Sensor Networks Power and bandwidth are scarce resources in dense wireless sensor networks and it is widely recognized that joint optimization of the operations of sensing, processing and communication can result in significant savings in the use of network resources. In this paper, a distributed joint source-channel communication architecture is proposed for energy-efficient estimation of sensor field data at a distant destination and the corresponding relationships between power, distortion, and latency are analyzed as a function of number of sensor nodes. The approach is applicable to a broad class of sensed signal fields and is based on distributed computation of appropriately chosen projections of sensor data at the destination - phase-coherent transmissions from the sensor nodes enable exploitation of the distributed beamforming gain for energy efficiency. Random projections are used when little or no prior knowledge is available about the signal field. Distinct features of the proposed scheme include: (1) processing and communication are combined into one distributed projection operation; (2) it virtually eliminates the need for in-network processing and communication; (3) given sufficient prior knowledge about the sensed data, consistent estimation is possible with increasing sensor density even with vanishing total network power; and (4) consistent signal estimation is possible with power and latency requirements growing at most sublinearly with the number of sensor nodes even when little or no prior knowledge about the sensed data is assumed at the sensor nodes.
Signal Reconstruction From Noisy Random Projections Recent results show that a relatively small number of random projections of a signal can contain most of its salient information. It follows that if a signal is compressible in some orthonormal basis, then a very accurate reconstruction can be obtained from random projections. This "compressive sampling" approach is extended here to show that signals can be accurately recovered from random projections contaminated with noise. A practical iterative algorithm for signal reconstruction is proposed, and potential applications to coding, analog-digital (A/D) conversion, and remote wireless sensing are discussed
Just relax: convex programming methods for identifying sparse signals in noise This paper studies a difficult and fundamental problem that arises throughout electrical engineering, applied mathematics, and statistics. Suppose that one forms a short linear combination of elementary signals drawn from a large, fixed collection. Given an observation of the linear combination that has been contaminated with additive noise, the goal is to identify which elementary signals participated and to approximate their coefficients. Although many algorithms have been proposed, there is little theory which guarantees that these algorithms can accurately and efficiently solve the problem. This paper studies a method called convex relaxation, which attempts to recover the ideal sparse signal by solving a convex program. This approach is powerful because the optimization can be completed in polynomial time with standard scientific software. The paper provides general conditions which ensure that convex relaxation succeeds. As evidence of the broad impact of these results, the paper describes how convex relaxation can be used for several concrete signal recovery problems. It also describes applications to channel coding, linear regression, and numerical analysis
Fuzzy Logic and the Resolution Principle The relationship between fuzzy logic and two-valued logic in the context of the first order predicate calculus is discussed. It is proved that if every clause in a set of clauses is something more than a "half-truth" and the most reliable clause has truth-value a and the most unreliable clause has truth-value b, then we are guaranteed that all the logical consequences obtained by repeatedly applying the resolution principle will have truth-value between a and b. The significance of this theorem is also discussed.
Creating knowledge databases for storing and sharing people knowledge automatically using group decision making and fuzzy ontologies Over the last decade, the Internet has undergone a profound change. Thanks to Web 2.0 technologies, the Internet has become a platform where everybody can participate and provide their own personal information and experiences. Ontologies were designed in an effort to sort and categorize all sorts of information. In this paper, an automatized method for retrieving the subjective Internet users information and creating ontologies is described. Thanks to this method, it is possible to automatically create knowledge databases using the common knowledge of a large amount of people. Using these databases, anybody can consult and benefit from the retrieved information. Group decision making methods are used to extract users information and fuzzy ontologies are employed to store the collected knowledge.
Exact Matrix Completion via Convex Optimization We consider a problem of considerable practical interest: the recovery of a data matrix from a sampling of its entries. Suppose that we observe m entries selected uniformly at random from a matrix M. Can we complete the matrix and recover the entries that we have not seen? We show that one can perfectly recover most low-rank matrices from what appears to be an incomplete set of entries. We prove that if the number m of sampled entries obeys $$m\ge C\,n^{1.2}r\log n$$ for some positive numerical constant C, then with very high probability, most n×n matrices of rank r can be perfectly recovered by solving a simple convex optimization program. This program finds the matrix with minimum nuclear norm that fits the data. The condition above assumes that the rank is not too large. However, if one replaces the 1.2 exponent with 1.25, then the result holds for all values of the rank. Similar results hold for arbitrary rectangular matrices as well. Our results are connected with the recent literature on compressed sensing, and show that objects other than signals and images can be perfectly reconstructed from very limited information.
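The theorem above concerns the nuclear-norm program itself; as an illustration of how such a program can be solved in practice, here is a brief sketch of singular value thresholding, one standard iterative solver from the related literature (not part of this paper), with parameter choices commonly suggested for it. The example matrix and sampling rate are made up.

```python
import numpy as np

def svt_complete(M_obs, mask, n_iter=300):
    """Singular value thresholding sketch for nuclear-norm matrix completion.
    M_obs holds the observed entries (zeros elsewhere); mask marks which entries were seen."""
    n1, n2 = M_obs.shape
    tau = 5 * np.sqrt(n1 * n2)                   # threshold and step size as commonly suggested
    delta = 1.2 * n1 * n2 / mask.sum()
    Y = np.zeros_like(M_obs)
    X = np.zeros_like(M_obs)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        X = (U * np.maximum(s - tau, 0.0)) @ Vt  # shrink the singular values
        Y += delta * mask * (M_obs - X)          # step on the observed-entry residual
    return X

rng = np.random.default_rng(0)
M = rng.normal(size=(60, 4)) @ rng.normal(size=(4, 60))   # rank-4 ground truth
mask = rng.random(M.shape) < 0.4                          # observe ~40% of the entries
X = svt_complete(M * mask, mask)
print(np.linalg.norm(X - M) / np.linalg.norm(M))          # relative recovery error
```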
Fast image recovery using variable splitting and constrained optimization We propose a new fast algorithm for solving one of the standard formulations of image restoration and reconstruction which consists of an unconstrained optimization problem where the objective includes an l2 data-fidelity term and a nonsmooth regularizer. This formulation allows both wavelet-based (with orthogonal or frame-based representations) regularization or total-variation regularization. Our approach is based on a variable splitting to obtain an equivalent constrained optimization formulation, which is then addressed with an augmented Lagrangian method. The proposed algorithm is an instance of the so-called alternating direction method of multipliers, for which convergence has been proved. Experiments on a set of image restoration and reconstruction benchmark problems show that the proposed algorithm is faster than the current state of the art methods.
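To make the splitting idea concrete, here is a minimal sketch of the same alternating-direction pattern applied to the simplest case of an ℓ2 data term plus an ℓ1 regularizer with an explicit matrix A. The sizes and parameters are illustrative, and this is not the authors' implementation (which targets wavelet and total-variation regularizers and large implicit operators).

```python
import numpy as np

def admm_l1(A, y, lam=0.1, rho=1.0, n_iter=200):
    """Variable-splitting / ADMM sketch for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    n = A.shape[1]
    AtA, Aty = A.T @ A, A.T @ y
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    L = np.linalg.cholesky(AtA + rho * np.eye(n))             # factor once, reuse every iteration
    for _ in range(n_iter):
        rhs = Aty + rho * (z - u)
        x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))     # quadratic subproblem
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0.0)  # soft threshold (prox of l1)
        u = u + x - z                                         # dual update
    return z

rng = np.random.default_rng(0)
A = rng.normal(size=(80, 256)) / np.sqrt(80)
x_true = np.zeros(256)
x_true[[10, 70, 200]] = [1.5, -1.0, 2.0]
print(np.flatnonzero(np.abs(admm_l1(A, A @ x_true)) > 0.1))   # expected support: [10, 70, 200]
```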
An overview of quality of experience measurement challenges for video applications in IP networks The increase in multimedia content on the Internet has created a renewed interest in quality assessment. There is, however, a main difference from traditional quality assessment approaches: the focus is now on user-perceived quality, as opposed to the classically proposed network-centered approach. In this paper we overview the most relevant challenges to perform Quality of Experience (QoE) assessment in IP networks and highlight the particular considerations necessary when compared to alternative mechanisms, already deployed, such as Quality of Service (QoS). To assist in handling such challenges we first discuss the different approaches to Quality of Experience assessment along with the most relevant QoE metrics, and then we discuss how they are used to provide objective results about user satisfaction.
Heden's bound on maximal partial spreads We prove Heden's result that the deficiency δ of a maximal partial spread in PG(3, q) is greater than 1 + ½(1+√5)√q unless δ−1 is a multiple of p, where q = p^n. When q is odd and not a square, we are able to improve this lower bound to roughly √(3q).
Designing type-2 fuzzy logic system controllers via fuzzy Lyapunov synthesis for the output regulator of a servomechanism with nonlinear backlash Fuzzy Lyapunov Synthesis is extended to the design of Type-2 Fuzzy Logic System Controllers for the output regulation problem for a servomechanism with nonlinear backlash. The problem in question is to design a feedback controller so as to obtain the closed-loop system in which all trajectories are bounded and the load of the driver is regulated to a desired position while also attenuating the influence of external disturbances. The servomotor position is the only measurement available for feedback; the proposed extension is far from trivial because of nonminimum phase properties of the system. Performance issues of the Type-2 Fuzzy Logic Regulator constructed are illustrated in a simulation study.
1.0576
0.053333
0.053333
0.04
0.014545
0.003152
0
0
0
0
0
0
0
0
Using the extension of DEMATEL to integrate hotel service quality perceptions into a cause-effect model in uncertainty This study proposes an evaluation approach for assessing service quality perceptions based on the fuzzy measure and an extension of the decision-making trial and evaluation laboratory (EDEMATEL). The research comprises the perceptions of two study groups, namely customers and employees, covering 22 evaluation criteria assessed by both groups from the top four English hotels in Taiwan. Human perception of service quality usually involves imprecision and vagueness. Triangular fuzzy numbers represent the vague and subjective information, and a defuzzification method is used to convert the vague linguistic information into a crisp value. This study applies fuzzy set theory and the EDEMATEL method to evaluate the interrelationships of the service quality evaluation criteria and to compromise the group perceptions into a cause and effect model under uncertainty. The proposed approach is an effective method for assessing the group perceptions, and it integrates the different perceptions into a compromised cause and effect model of hotel service quality under uncertainty. The managerial implications are discussed.
A novel fuzzy Dempster-Shafer inference system for brain MRI segmentation Brain Magnetic Resonance Imaging (MRI) segmentation is a challenging task due to the complex anatomical structure of brain tissues as well as intensity non-uniformity, partial volume effects and noise. Segmentation methods based on fuzzy approaches have been developed to overcome the uncertainty caused by these effects. In this study, a novel combination of fuzzy inference system and Dempster-Shafer Theory is applied to brain MRI for the purpose of segmentation where the pixel intensity and the spatial information are used as features. In the proposed modeling, the consequent part of rules is a Dempster-Shafer belief structure. The novelty aspect of this work is that the rules are paraphrased as evidences. The results show that the proposed algorithm, called FDSIS has satisfactory outputs on both simulated and real brain MRI datasets.
A fuzzy multi-criteria model for the industrial cooperation program transaction strategies: A case in Taiwan In international trade, offset practices (known in Taiwan as the industrial cooperation program, ICP) have received increased attention over the past 20 years. In the coming 10 years, the Taiwanese government may expend roughly US$16 billion for purchasing Patriot-III missiles, P-3 long-range anti-submarine planes, and diesel-engine submarines from the United States through foreign military sale, and can achieve US$8 billion in ICP credit, the largest in Taiwanese history. Offsets or ICP can be regarded as fuzzy multiple criteria decision-making (MCDM) problems; therefore, the fuzziness and uncertainty of subjective perception should be considered. This paper provides an alternative approach, the non-additive fuzzy integral, to deal with fuzzy MCDM problems, especially when there is dependence among the considered criteria. The main purpose of this paper is to discuss Taiwan's ICP Optimal Offset Transaction Policy and propose a framework for drawing on ICP credit in the future. This paper considers the four aspects of policy, ability, economy, and environment to establish a fuzzy AHP multiple criteria decision model that identifies the evaluative criteria variables and the ordering of project items for an ICP project. This decision model was identified as a workable method.
Performance measurement model for Turkish aviation firms using the rough-AHP and TOPSIS methods under fuzzy environment In today's organizations, performance measurement comes more to the foreground with the advancement in the high technology. So as to manage this power, which is an important element of the organizations, it is needed to have a performance measurement system. Increased level of competition in the business environment and higher customer requirements forced industry to establish a new philosophy to measure its performance beyond the existing financial and non-financial based performance indicators. In this paper, a conceptual performance measurement framework that takes into account company-level factors is presented for a real world application problem. In order to use the conceptual framework for measuring performance, a methodology that takes into account both quantitative and qualitative factors and the interrelations between them should be utilized. For this reason, an integrated approach of analytic hierarchy process (AHP) improved by rough sets theory (Rough-AHP) and fuzzy TOPSIS method is proposed to obtain final ranking.
Fuzzy Multi-Criteria Evaluation Of Knowledge Management Tools In the knowledge economy, a key source of sustainable competitive advantage relies on the way to create, share, and utilize knowledge. Knowledge Management (KM) tools assumed an important role in supporting KM activities. The objective of this paper is to aid decision makers to identify the most appropriate KM tool to improve the effectiveness of their organization. In order to rate competing systems of different vendors, we propose an enhanced multi-criteria method, namely fuzzy VIKOR, that takes advantage of fuzzy logic and group decision making to deal with the vagueness and granularity in the linguistic assessments. The method aims to isolate compromise solutions, by providing a maximum group utility and a minimum of an individual regret. A case study is also given to demonstrate the potential of the methodology.
Evaluation model of business intelligence for enterprise systems using fuzzy TOPSIS Evaluation of business intelligence for enterprise systems before buying and deploying them is of vital importance to create decision support environment for managers in organizations. This study aims to propose a new model to provide a simple approach to assess enterprise systems in business intelligence aspects. This approach also helps the decision-maker to select the enterprise system which has suitable intelligence to support managers' decisional tasks. Using wide literature review, 34 criteria about business intelligence specifications are determined. A model that exploits fuzzy TOPSIS technique has been proposed in this research. Fuzzy weights of the criteria and fuzzy judgments about enterprise systems as alternatives are employed to compute evaluation scores and ranking. This application is realized to illustrate the utilization of the model for the evaluation problems of enterprise systems. On this basis, organizations will be able to select, assess and purchase enterprise systems which make possible better decision support environment in their work systems.
Designing a model of fuzzy TOPSIS in multiple criteria decision making Decision making is the process of finding the best option among the feasible alternatives. In classical multiple attribute decision making (MADM) methods, the ratings and the weights of the criteria are known precisely. Due to vagueness of the decision data, the crisp data are inadequate for real-life situations. Since human judgments including preferences are often vague and cannot be expressed by exact numerical values, the application of fuzzy concepts in decision making is deemed to be relevant. We design a model of TOPSIS for the fuzzy environment with the introduction of appropriate negations for obtaining ideal solutions. Here, we apply a new measurement of fuzzy distance value with a lower bound of alternatives. Then similarity degree is used for ranking of alternatives. Examples are shown to demonstrate capabilities of the proposed model.
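For orientation, here is a generic fuzzy-TOPSIS sketch in the style of Chen's classical method, using triangular fuzzy numbers, the vertex distance, and crisp ideal points. The paper's specific negation-based ideal solutions and its distance measure with a lower bound would replace the corresponding pieces; all ratings below are made up.

```python
import numpy as np

# Weighted ratings of 3 alternatives on 2 benefit criteria as triangular fuzzy numbers (l, m, u).
R = np.array([
    [[0.5, 0.7, 0.9], [0.3, 0.5, 0.7]],
    [[0.3, 0.5, 0.7], [0.7, 0.9, 1.0]],
    [[0.1, 0.3, 0.5], [0.5, 0.7, 0.9]],
])

def d(a, b):
    """Vertex distance between two triangular fuzzy numbers."""
    return np.sqrt(((a - b) ** 2).mean())

fpis = np.array([1.0, 1.0, 1.0])   # fuzzy positive ideal rating
fnis = np.array([0.0, 0.0, 0.0])   # fuzzy negative ideal rating

cc = []
for alt in R:
    d_plus = sum(d(r, fpis) for r in alt)      # distance to the positive ideal
    d_minus = sum(d(r, fnis) for r in alt)     # distance to the negative ideal
    cc.append(d_minus / (d_plus + d_minus))    # closeness coefficient

print(np.argsort(cc)[::-1])                    # ranking of alternatives, best first
```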
An ELECTRE-based outranking method for multiple criteria group decision making using interval type-2 fuzzy sets The aim of this paper is to develop an ELECTRE (ELimination Et Choice Translating REality)-based outranking method for multiple criteria group decision-making within the environment of interval type-2 fuzzy sets. Along with considering the context of interval type-2 trapezoidal fuzzy numbers, this paper employs a hybrid averaging approach with signed distances to construct a collective decision matrix and proposes the use of ELECTRE-based outranking methods to analyze the collective interval type-2 fuzzy data. By applying a signed distance approach, this work identifies the concordance and discordance sets to determine the concordance and discordance indices, respectively, for each pair of alternatives. Based on an aggregate outranking matrix, a decision graph is constructed to determine the partial-preference ordering of the alternatives and the ELECTREcally non-outranked solutions. This paper provides additional approaches at the final selection stage to yield a linear ranking order of the alternatives. The feasibility and applicability of the proposed methods are illustrated with an example that addresses supplier selection, and a comparative analysis is performed with other approaches to validate the effectiveness of the proposed methodology.
Accuracy and complexity evaluation of defuzzification strategies for the discretised interval type-2 fuzzy set. The work reported in this paper addresses the challenge of the efficient and accurate defuzzification of discretised interval type-2 fuzzy sets. The exhaustive method of defuzzification for type-2 fuzzy sets is extremely slow, owing to its enormous computational complexity. Several approximate methods have been devised in response to this bottleneck. In this paper we survey four alternative strategies for defuzzifying an interval type-2 fuzzy set: (1) the Karnik-Mendel Iterative Procedure, (2) the Wu-Mendel Approximation, (3) the Greenfield-Chiclana Collapsing Defuzzifier, and (4) the Nie-Tan Method. We evaluated the different methods experimentally for accuracy, by means of a comparative study using six representative test sets with varied characteristics, using the exhaustive method as the standard. A preliminary ranking of the methods was achieved using a multicriteria decision making methodology based on the assignment of weights according to performance. The ranking produced, in order of decreasing accuracy, is (1) the Collapsing Defuzzifier, (2) the Nie-Tan Method, (3) the Karnik-Mendel Iterative Procedure, and (4) the Wu-Mendel Approximation. Following that, a more rigorous analysis was undertaken by means of the Wilcoxon Nonparametric Test, in order to validate the preliminary test conclusions. It was found that there was no evidence of a significant difference between the accuracy of the collapsing and Nie-Tan Methods, and between that of the Karnik-Mendel Iterative Procedure and the Wu-Mendel Approximation. However, there was evidence to suggest that the collapsing and Nie-Tan Methods are more accurate than the Karnik-Mendel Iterative Procedure and the Wu-Mendel Approximation. In relation to efficiency, each method's computational complexity was analysed, resulting in a ranking (from least computationally complex to most computationally complex) as follows: (1) the Nie-Tan Method, (2) the Karnik-Mendel Iterative Procedure (lowest complexity possible), (3) the Greenfield-Chiclana Collapsing Defuzzifier, (4) the Karnik-Mendel Iterative Procedure (highest complexity possible), and (5) the Wu-Mendel Approximation.
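As a concrete illustration of the least complex strategy in that ranking, the following is a minimal sketch of the Nie-Tan defuzzifier for a discretised interval type-2 fuzzy set: the crisp output is the centroid of the average of the lower and upper membership functions. The membership functions below are invented for the example.

```python
import numpy as np

def nie_tan(x, mu_lower, mu_upper):
    """Nie-Tan defuzzification of a discretised interval type-2 fuzzy set:
    the centroid of the average of the lower and upper membership functions."""
    avg = (np.asarray(mu_lower) + np.asarray(mu_upper)) / 2.0
    return np.sum(x * avg) / np.sum(avg)

# Illustrative discretised set: a triangular upper MF with a scaled-down lower MF.
x = np.linspace(0.0, 10.0, 101)
mu_upper = np.maximum(1.0 - np.abs(x - 4.0) / 4.0, 0.0)
mu_lower = 0.6 * mu_upper
print(nie_tan(x, mu_lower, mu_upper))     # close to 4.0 for this symmetric example
```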
Relationship between similarity measure and entropy of interval valued fuzzy sets In this paper, we introduce a concept of entropy of interval valued fuzzy sets, which differs from that of Bustince and Burillo [Entropy on intuitionistic fuzzy sets and on interval-valued fuzzy sets, Fuzzy Sets and Systems 78 (1996) 305-316], and a similarity measure of interval valued fuzzy sets, discuss the relationship between similarity measure and entropy of interval valued fuzzy sets in detail, prove three theorems showing that similarity measure and entropy of interval valued fuzzy sets can be transformed into each other based on their axiomatic definitions, and put forward some formulas to calculate the entropy and similarity measure of interval valued fuzzy sets.
Anaphora for everyone: pronominal anaphora resolution without a parser We present an algorithm for anaphora resolution which is a modified and extended version of that developed by (Lappin and Leass, 1994). In contrast to that work, our algorithm does not require in-depth, full, syntactic parsing of text. Instead, with minimal compromise in output quality, the modifications enable the resolution process to work from the output of a part of speech tagger, enriched only with annotations of grammatical function of lexical items in the input text stream. Evaluation of the results of our implementation demonstrates that accurate anaphora resolution can be realized within natural language processing frameworks which do not, or cannot, employ robust and reliable parsing components.
Exact Reconstruction of Sparse Signals via Nonconvex Minimization Several authors have shown recently that it is possible to reconstruct exactly a sparse signal from fewer linear measurements than would be expected from traditional sampling theory. The methods used involve computing the signal of minimum ℓ1 norm among those having the given measurements. We show that by replacing the ℓ1 norm with the ℓp norm with p < 1, exact reconstruction is possible ...
Low-dimensional signal-strength fingerprint-based positioning in wireless LANs Accurate location awareness is of paramount importance in most ubiquitous and pervasive computing applications. Numerous solutions for indoor localization based on IEEE802.11, bluetooth, ultrasonic and vision technologies have been proposed. This paper introduces a suite of novel indoor positioning techniques utilizing signal-strength (SS) fingerprints collected from access points (APs). Our first approach employs a statistical representation of the received SS measurements by means of a multivariate Gaussian model by considering a discretized grid-like form of the indoor environment and by computing probability distribution signatures at each cell of the grid. At run time, the system compares the signature at the unknown position with the signature of each cell by using the Kullback-Leibler Divergence (KLD) between their corresponding probability densities. Our second approach applies compressive sensing (CS) to perform sparsity-based accurate indoor localization, while reducing significantly the amount of information transmitted from a wireless device, possessing limited power, storage, and processing capabilities, to a central server. The performance evaluation which was conducted at the premises of a research laboratory and an aquarium under real-life conditions, reveals that the proposed statistical fingerprinting and CS-based localization techniques achieve a substantial localization accuracy.
On the Estimation of Coherence Low-rank matrix approximations are often used to help scale standard machine learning algorithms to large-scale problems. Recently, matrix coherence has been used to characterize the ability to extract global information from a subset of matrix entries in the context of these low-rank approximations and other sampling-based algorithms, e.g., matrix completion, robust PCA. Since coherence is defined in terms of the singular vectors of a matrix and is expensive to compute, the practical significance of these results largely hinges on the following question: Can we efficiently and accurately estimate the coherence of a matrix? In this paper we address this question. We propose a novel algorithm for estimating coherence from a small number of columns, formally analyze its behavior, and derive a new coherence-based matrix approximation bound based on this analysis. We then present extensive experimental results on synthetic and real datasets that corroborate our worst-case theoretical analysis, yet provide strong support for the use of our proposed algorithm whenever low-rank approximation is being considered. Our algorithm efficiently and accurately estimates matrix coherence across a wide range of datasets, and these coherence estimates are excellent predictors of the effectiveness of sampling-based matrix approximation on a case-by-case basis.
1.052295
0.055712
0.053467
0.053467
0.05
0.019507
0.005445
0.00102
0.000348
0.00006
0
0
0
0
Hall's theorem for hypergraphs We prove a hypergraph version of Hall's theorem. The proof is topological.
On a Generalization of the Ryser-Brualdi-Stein Conjecture A rainbow matching for not necessarily distinct sets F1, …, Fk of hypergraph edges is a matching consisting of k edges, one from each Fi. The aim of the article is twofold: to put order in the multitude of conjectures that relate to this concept (some first presented here), and to prove partial results on one of the central conjectures.
Vector Representation of Graph Domination We study a function on graphs, denoted by “Gamma”, representing vectorially the domination number of a graph, in a way similar to that in which the Lovász Theta function represents the independence number of a graph. This function is a lower bound on the homological connectivity of the independence complex of the graph, and hence is of value in studying matching problems by topological methods. Not much is known at present about the Gamma function; in particular, there is no known procedure for its computation for general graphs. In this article we compute the precise value of Gamma for trees and cycles, and to achieve this we prove new lower and upper bounds on Gamma, formulated in terms of known domination and algebraic parameters of the graph. We also use the Gamma function to prove a fractional version of a strengthening of Ryser's conjecture. © 2012 Wiley Periodicals, Inc. J Graph Theory
The Clique Complex and Hypergraph Matching The width of a hypergraph H is the minimal t for which there exist edges e1, ..., et ∈ H such that for any edge e ∈ H, e ∩ ei ≠ ∅ for some i. The matching width of H is the minimal t such that for any matching M in H there exist edges e1, ..., et ∈ H such that for any e ∈ M, e ∩ ei ≠ ∅ for some i. The following extension of the Aharoni-Haxell matching Theorem [3] is proved: let F1, ..., Fn be a family of hypergraphs such that for each of them either a width condition or a matching-width condition holds; then there exists a matching {e1, ..., en} such that ei ∈ Fi for all 1 ≤ i ≤ n. This is a consequence of a more general result on colored cliques in graphs. The proofs are topological and use the Nerve Theorem.
Ryser's Conjecture for Tripartite 3-Graphs We prove that in a tripartite 3-graph τ ≤ 2ν.
Characterization of graphs with equal domination and covering number Let G be a simple graph of order n(G). A vertex set D of G is dominating if every vertex not in D is adjacent to some vertex in D, and D is a covering if every edge of G has at least one end in D. The domination number γ(G) is the minimum order of a dominating set, and the covering number β(G) is the minimum order of a covering set in G. In 1981, Laskar and Walikar raised the question of characterizing those connected graphs for which γ(G) = β(G). It is the purpose of this paper to give a complete solution of this problem. This solution shows that the recognition problem, whether a connected graph G has the property γ(G) = β(G), is solvable in polynomial time. As an application of our main results we determine all connected extremal graphs in the well-known inequality γ(G) ≤ ⌊n(G)/2⌋ of Ore (1962), which considerably extends a result of Payan and Xuong from 1982. With a completely different method, independently and around the same time, Cockayne, Haynes and Hedetniemi also characterized the connected graphs G with γ(G) = ⌊n(G)/2⌋.
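The two parameters compared above are easy to compute exactly on small graphs by brute force, which makes the inequality γ(G) ≤ β(G) concrete. The sketch below does this for a 5-cycle; it illustrates the definitions only and is not the characterization result of the paper.

```python
import itertools

def is_dominating(adj, D):
    # Every vertex is in D or has a neighbor in D.
    return all(v in D or adj[v] & D for v in adj)

def is_covering(adj, D):
    # Every edge has at least one endpoint in D.
    return all(u in D or v in D for u in adj for v in adj[u])

def min_size(adj, predicate):
    vertices = list(adj)
    for k in range(len(vertices) + 1):
        for subset in itertools.combinations(vertices, k):
            if predicate(adj, set(subset)):
                return k

# A 5-cycle as an adjacency dict: gamma(C5) = 2, beta(C5) = 3.
c5 = {0: {1, 4}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 0}}
gamma = min_size(c5, is_dominating)
beta = min_size(c5, is_covering)
print(f"domination number {gamma}, covering number {beta}")  # 2, 3
```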
Hypergraphs with large domination number and with edge sizes at least three Let H=(V,E) be a hypergraph with vertex set V and edge set E. A dominating set in H is a subset of vertices D ⊆ V such that for every vertex v ∈ V \ D there exists an edge e ∈ E for which v ∈ e and e ∩ D ≠ ∅. The domination number γ(H) is the minimum cardinality of a dominating set in H. It is known that if H is a hypergraph of order n with edge sizes at least three and with no isolated vertex, then γ(H) ≤ n/3. In this paper, we characterize the hypergraphs achieving equality in this bound.
Type-2 Fuzzy Decision Trees This paper presents type-2 fuzzy decision trees (T2FDTs) that employ type-2 fuzzy sets as values of attributes. A modified fuzzy double clustering algorithm is proposed as a method for generating type-2 fuzzy sets. This method makes it possible to create T2FDTs that are easy to interpret and understand. To illustrate the performance of the proposed T2FDTs, and in order to compare them with results obtained for type-1 fuzzy decision trees (T1FDTs), two benchmark data sets, available on the internet, have been used.
Approximate Volume and Integration for Basic Semialgebraic Sets Given a basic compact semialgebraic set $\mathbf{K}\subset\mathbb{R}^n$, we introduce a methodology that generates a sequence converging to the volume of $\mathbf{K}$. This sequence is obtained from optimal values of a hierarchy of either semidefinite or linear programs. Not only the volume but also every finite vector of moments of the probability measure that is uniformly distributed on $\mathbf{K}$ can be approximated as closely as desired, which permits the approximation of the integral on $\mathbf{K}$ of any given polynomial; the extension to integration against some weight functions is also provided. Finally, some numerical issues associated with the algorithms involved are briefly discussed.
Statistical leakage estimation based on sequential addition of cell leakage currents This paper presents a novel method for full-chip statistical leakage estimation that considers the impact of process variation. The proposed method considers the correlations among leakage currents in a chip and the state dependence of the leakage current of a cell for an accurate analysis. For an efficient addition of the cell leakage currents, we propose the virtual-cell approximation (VCA), which sums cell leakage currents sequentially by approximating their sum as the leakage current of a single virtual cell while preserving the correlations among leakage currents. By the use of the VCA, the proposed method efficiently calculates a full-chip leakage current. Experimental results using ISCAS benchmarks at various process variation levels showed that the proposed method provides an accurate result by demonstrating average leakage mean and standard deviation errors of 3.12% and 2.22%, respectively, when compared with the results of a Monte Carlo (MC) simulation-based leakage estimation. In terms of efficiency, the proposed method was also demonstrated to be 5000 times faster than MC simulation-based leakage estimations and 9000 times faster than Wilkinson's method-based leakage estimation.
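Wilkinson-style moment matching, mentioned above as a baseline, approximates a sum of correlated lognormal leakage currents by a single lognormal whose first two moments match. The sketch below implements that generic approximation, not the paper's virtual-cell approximation; the cell count, means, and covariance are made-up toy values, and a Monte Carlo run is used only as a sanity check.

```python
import numpy as np

def wilkinson_sum(mu, cov):
    """Approximate the sum of correlated lognormals exp(Y), Y ~ N(mu, cov),
    by a single lognormal via mean/variance matching (Wilkinson's method)."""
    mean = np.sum(np.exp(mu + np.diag(cov) / 2))
    second = 0.0
    for i in range(len(mu)):
        for j in range(len(mu)):
            second += np.exp(mu[i] + mu[j]
                             + 0.5 * (cov[i, i] + cov[j, j] + 2 * cov[i, j]))
    s2 = np.log(second / mean ** 2)    # variance of the matched underlying Gaussian
    m = np.log(mean) - s2 / 2          # mean of the matched underlying Gaussian
    return m, s2

rng = np.random.default_rng(2)
n = 4                                  # four cells with correlated log-leakage
mu = np.full(n, -2.0)
A = 0.1 * rng.standard_normal((n, n))
cov = A @ A.T + 0.05 * np.eye(n)

m, s2 = wilkinson_sum(mu, cov)

# Monte Carlo check of the total leakage statistics.
samples = np.exp(rng.multivariate_normal(mu, cov, size=200_000)).sum(axis=1)
print("matched mean", np.exp(m + s2 / 2), " MC mean", samples.mean())
print("matched std ", np.sqrt((np.exp(s2) - 1) * np.exp(2 * m + s2)),
      " MC std ", samples.std())
```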
A note on compressed sensing and the complexity of matrix multiplication We consider the conjectured O(N^(2+ε)) time complexity of multiplying any two N×N matrices A and B. Our main result is a deterministic Compressed Sensing (CS) algorithm that both rapidly and accurately computes A·B provided that the resulting matrix product is sparse/compressible. As a consequence of our main result we increase the class of matrices A, for any given N×N matrix B, which allows the exact computation of A·B to be carried out using the conjectured O(N^(2+ε)) operations. Additionally, in the process of developing our matrix multiplication procedure, we present a modified version of Indyk's recently proposed extractor-based CS algorithm [P. Indyk, Explicit constructions for compressed sensing of sparse signals, in: SODA, 2008] which is resilient to noise.
Estimating human pose from occluded images We address the problem of recovering 3D human pose from single 2D images, in which the pose estimation problem is formulated as a direct nonlinear regression from image observation to 3D joint positions. One key issue that has not been addressed in the literature is how to estimate 3D pose when humans in the scenes are partially or heavily occluded. When occlusions occur, features extracted from image observations (e.g., silhouettes-based shape features, histogram of oriented gradient, etc.) are seriously corrupted, and consequently the regressor (trained on un-occluded images) is unable to estimate pose states correctly. In this paper, we present a method that is capable of handling occlusions using sparse signal representations, in which each test sample is represented as a compact linear combination of training samples. The sparsest solution can then be efficiently obtained by solving a convex optimization problem with certain norms (such as l1-norm). The corrupted test image can be recovered with a sparse linear combination of un-occluded training images which can then be used for estimating human pose correctly (as if no occlusions exist). We also show that the proposed approach implicitly performs relevant feature selection with un-occluded test images. Experimental results on synthetic and real data sets bear out our theory that with sparse representation 3D human pose can be robustly estimated when humans are partially or heavily occluded in the scenes.
Generalized Boolean Methods of Information Retrieval In most operational information retrieval systems the standard retrieval methods based on set theory and binary logic are used. These methods would be much more attractive if they could be extended to include the importance of various index terms in document representations and search request formulations, in addition to a weighting mechanism which could be applied to rank the retrieved documents. This observation has been widely recognized in the literature as such extended retrieval methods could provide the precision of a Boolean search and the advantages of a ranked output. However, a closer examination of all the reported work reveals that up to the present the only possible approach of sufficient consistency and rigorousness is that based on recently developed fuzzy set theory and fuzzy logic. As the concept of a fuzzy set is a generalization of the conventional notion of a set, the generalization of the information retrieval methods based on set theory and binary logic can be derived in a natural way. The present paper describes such generalized Boolean information retrieval methods. The presentation of each includes an outline of its advantages and disadvantages, and the relationships between each particular method and the corresponding standard information retrieval method based on set theory and binary logic are also discussed. It has been shown that these standard retrieval methods are particular cases of information retrieval methods based on the theory of fuzzy sets and fuzzy logic. The considerations concerning the information retrieval methods presented are illustrated by simple examples.
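A tiny sketch of the min/max fuzzy-set interpretation of Boolean queries that this line of work generalizes: index-term weights in [0, 1] act as membership degrees, AND maps to min, OR to max, and NOT to the complement, yielding a ranked output. The documents, terms, and query below are invented for illustration.

```python
# Fuzzy Boolean retrieval: term weights in [0, 1] are membership degrees,
# AND -> min, OR -> max, NOT -> complement; documents are ranked by the score.
docs = {
    "d1": {"fuzzy": 0.9, "retrieval": 0.4, "logic": 0.7},
    "d2": {"fuzzy": 0.2, "retrieval": 0.8, "logic": 0.1},
    "d3": {"fuzzy": 0.6, "retrieval": 0.6},
}

def w(doc, term):
    return docs[doc].get(term, 0.0)

def score(doc):
    # Query: (fuzzy AND retrieval) OR (NOT logic)
    return max(min(w(doc, "fuzzy"), w(doc, "retrieval")), 1.0 - w(doc, "logic"))

ranking = sorted(docs, key=score, reverse=True)
print([(d, round(score(d), 2)) for d in ranking])
```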
Implementing Competitive Learning in a Quantum System Ideas from quantum computation are applied to the field of neural networks to produce competitive learning in a quantum system. The resulting quantum competitive learner has a prototype storage capacity that is exponentially greater than that of its classical counterpart. Further, empirical results from simulation of the quantum competitive learning system on real-world data sets demonstrate the quantum system's potential for excellent performance.
Scores: 1.023582, 0.033557, 0.033557, 0.032046, 0.013305, 0.000275, 0.000057, 0, 0, 0, 0, 0, 0, 0
Multi-attribute fuzzy time series method based on fuzzy clustering Traditional time series methods can predict the seasonal problem, but fail to forecast the problems with linguistic value. An alternative forecasting method such as fuzzy time series is utilized to deal with these kinds of problems. Two shortcomings of the existing fuzzy time series forecasting methods are that they lack persuasiveness in determining universe of discourse and the length of intervals, and that they lack objective method for multiple-attribute fuzzy time series. This paper introduces a novel multiple-attribute fuzzy time series method based on fuzzy clustering. The methods of fuzzy clustering are integrated in the processes of fuzzy time series to partition datasets objectively and enable processing of multiple attributes. For verification, this paper uses two datasets: (1) the yearly data on enrollments at the University of Alabama, and (2) the Taiwan Stock Exchange Capitalization Weighted Stock Index (TAIEX) futures. The forecasting results show that the proposed method can forecast not only one-attribute but also multiple-attribute data effectively and outperform the listing methods.
Temperature prediction based on fuzzy clustering and fuzzy rules interpolation techniques In this paper, we present a new method to deal with temperature prediction based on fuzzy clustering and fuzzy rules interpolation techniques. First, the proposed method constructs fuzzy rules from training samples based on the fuzzy C-Means clustering algorithm, where each fuzzy rule corresponds to a cluster and the linguistic terms appearing in the fuzzy rules are represented by triangular fuzzy sets. Then, it performs fuzzy inference based on the multiple fuzzy rules interpolation scheme, where it calculates the weight of each fuzzy rule with respect to the input observation based on the defuzzified values of triangular fuzzy sets. Finally, it uses the weight of each fuzzy rule to calculate the forecasted output. We also apply the proposed method to handle the temperature prediction problem. The experimental result shows that the proposed method gets higher average forecasting accuracy rates than Chen and Hwang's method [7].
Partitions based computational method for high-order fuzzy time series forecasting In this paper, we present a computational method of forecasting based on multiple partitioning and higher order fuzzy time series. The developed computational method provides a better approach to enhance the accuracy of forecasted values. The objective of the present study is to establish the fuzzy logical relations of different order for each forecast. Robustness of the proposed method is also examined in the case of external perturbation that causes fluctuations in time series data. The general suitability of the developed model has been tested by implementing it in forecasting of student enrollments at the University of Alabama. Further, it has also been implemented in forecasting the market price of shares of the State Bank of India (SBI) at the Bombay Stock Exchange (BSE), India. In order to show the superiority of the proposed model over a few existing models, the results obtained have been compared in terms of mean square and average forecasting errors.
Cardinality-based fuzzy time series for forecasting enrollments Forecasting activities are frequent and widespread in our life. Since Song and Chissom proposed the fuzzy time series in 1993, many previous studies have proposed variant fuzzy time series models to deal with uncertain and vague data. A drawback of these models is that they do not consider appropriately the weights of fuzzy relations. This paper proposes a new method to build weighted fuzzy rules by computing cardinality of each fuzzy relation to solve above problems. The proposed method is able to build the weighted fuzzy rules based on concept of large itemsets of Apriori. The yearly data on enrollments at the University of Alabama are adopted to verify and evaluate the performance of the proposed method. The forecasting accuracies of the proposed method are better than other methods.
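The first-order fuzzy time series pipeline shared by the methods above (partition the universe of discourse, fuzzify observations, collect fuzzy logical relationship groups, defuzzify by interval midpoints) can be sketched in a few lines. This follows the spirit of Chen-style forecasting rather than any one of the weighted or clustered variants described in these abstracts, and the series values are purely illustrative.

```python
import numpy as np

def fuzzy_ts_forecast(series, n_intervals=5):
    """First-order fuzzy time series forecast in the spirit of Chen's method:
    equal-length intervals, fuzzy logical relationship groups, midpoint defuzzification."""
    lo, hi = min(series) - 1, max(series) + 1
    edges = np.linspace(lo, hi, n_intervals + 1)
    mids = (edges[:-1] + edges[1:]) / 2

    def label(x):
        # Index of the interval containing x (clamped to the last interval).
        return min(int(np.searchsorted(edges, x, side="right")) - 1, n_intervals - 1)

    states = [label(x) for x in series]
    groups = {}                                 # A_i -> set of observed successors A_j
    for a, b in zip(states, states[1:]):
        groups.setdefault(a, set()).add(b)

    last = states[-1]
    succ = groups.get(last, {last})
    return float(np.mean([mids[s] for s in succ]))   # average of group midpoints

# Illustrative yearly series (synthetic values).
series = [13100, 13500, 13900, 14700, 15400, 15300, 15600, 15900, 16800, 16900]
print("next-step forecast:", round(fuzzy_ts_forecast(series), 1))
```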
The Roles of Fuzzy Logic and Soft Computing in the Conception, Design and Deployment of Intelligent Systems The essence of soft computing is that, unlike the traditional, hard computing, it is aimed at an accommodation with the pervasive imprecision of the real world. Thus, the guiding principle of soft computing is: ‘...exploit the tolerance for imprecision, uncertainty and partial truth to achieve tractability, robustness, low solution cost and better rapport with reality’. In the final analysis, the role model for soft computing is the human mind.
A hybrid forecasting model for enrollments based on aggregated fuzzy time series and particle swarm optimization In this paper, a new forecasting model based on two computational methods, fuzzy time series and particle swarm optimization, is presented for academic enrollments. Most of fuzzy time series forecasting methods are based on modeling the global nature of the series behavior in the past data. To improve forecasting accuracy of fuzzy time series, the global information of fuzzy logical relationships is aggregated with the local information of latest fuzzy fluctuation to find the forecasting value in fuzzy time series. After that, a new forecasting model based on fuzzy time series and particle swarm optimization is developed to adjust the lengths of intervals in the universe of discourse. From the empirical study of forecasting enrollments of students of the University of Alabama, the experimental results show that the proposed model gets lower forecasting errors than those of other existing models including both training and testing phases.
A 2uFunction representation for non-uniform type-2 fuzzy sets: Theory and design The theoretical and computational complexities involved in non-uniform type-2 fuzzy sets (T2 FSs) are main obstacles to apply these sets to modeling high-order uncertainties. To reduce the complexities, this paper introduces a 2uFunction representation for T2 FSs. This representation captures the ideas from probability theory. By using this representation, any non-uniform T2 FS can be represented by a function of two uniform T2 FSs. In addition, any non-uniform T2 fuzzy logic system (FLS) can be indirectly designed by two uniform T2 FLSs. In particular, a 2uFunction-based trapezoid T2 FLS is designed. Then, it is applied to the problem of forecasting Mackey-Glass time series corrupted by two kinds of noise sources: (1) stationary and (2) non-stationary additive noises. Finally, the performance of the proposed FLS is compared by (1) other types of FLS: T1 FLS and uniform T2 FLS, and (2) other studies: ANFIS [54], IT2FNN-1 [54], T2SFLS [3] and Q-T2FLS [35]. Comparative results show that the proposed design has a low prediction error as well as is suitable for online applications.
An optimization method for designing type-2 fuzzy inference systems based on the footprint of uncertainty using genetic algorithms This paper proposes an optimization method for designing type-2 fuzzy inference systems based on the footprint of uncertainty (FOU) of the membership functions, considering three different cases to reduce the complexity problem of searching the parameter space of solutions. For the optimization method, we propose the use of a genetic algorithm (GA) to optimize the type-2 fuzzy inference systems, considering different cases for changing the level of uncertainty of the membership functions to reach the optimal solution at the end.
Ranking type-2 fuzzy numbers Type-2 fuzzy sets are a generalization of the ordinary fuzzy sets in which each type-2 fuzzy set is characterized by a fuzzy membership function. In this paper, we consider the problem of ranking a set of type-2 fuzzy numbers. We adopt a statistical viewpoint and interpret each type-2 fuzzy number as an ensemble of ordinary fuzzy numbers. This enables us to define a type-2 fuzzy rank and a type-2 rank uncertainty for each intuitionistic fuzzy number. We show the reasonableness of the results obtained by examining several test cases
Anaphora for everyone: pronominal anaphora resolution without a parser We present an algorithm for anaphora resolution which is a modified and extended version of that developed by (Lappin and Leass, 1994). In contrast to that work, our algorithm does not require in-depth, full, syntactic parsing of text. Instead, with minimal compromise in output quality, the modifications enable the resolution process to work from the output of a part-of-speech tagger, enriched only with annotations of the grammatical function of lexical items in the input text stream. Evaluation of the results of our implementation demonstrates that accurate anaphora resolution can be realized within natural language processing frameworks which do not (or cannot) employ robust and reliable parsing components.
Phase noise in oscillators: a unifying theory and numerical methods for characterization Phase noise is a topic of theoretical and practical interest in electronic circuits, as well as in other fields, such as optics. Although progress has been made in understanding the phenomenon, there still remain significant gaps, both in its fundamental theory and in numerical techniques for its characterization. In this paper, we develop a solid foundation for phase noise that is valid for any oscillator, regardless of operating mechanism. We establish novel results about the dynamics of stable nonlinear oscillators in the presence of perturbations, both deterministic and random. We obtain an exact nonlinear equation for phase error, which we solve without approximations for random perturbations. This leads us to a precise characterization of timing jitter and spectral dispersion, for computing of which we have developed efficient numerical methods. We demonstrate our techniques on a variety of practical electrical oscillators and obtain good matches with measurements, even at frequencies close to the carrier, where previous techniques break down. Our methods are more than three orders of magnitude faster than the brute-force Monte Carlo approach, which is the only previously available technique that can predict phase noise correctly
Membership maximization prioritization methods for fuzzy analytic hierarchy process Fuzzy analytic hierarchy process (FAHP) has increasingly been applied in many areas. The extent analysis method is the popular tool for prioritization in FAHP, although significant technical errors in it are identified in this study. To address these errors, this research proposes membership maximization prioritization methods (MMPMs) using different membership functions as novel solutions. Because there is a lack of research on effectiveness measurement for crisp/fuzzy prioritization methods, this study proposes a membership fitness index to evaluate the effectiveness of prioritization methods. Comparisons with other popular fuzzy/crisp prioritization methods, including modified fuzzy preference programming, direct least squares, and the eigenvalue method, are conducted, and the analyses indicate that MMPMs lead to much more reliable results in view of the membership fitness index. A numerical example demonstrates the usability of MMPMs for FAHP, and thus MMPMs can effectively be applied to various decision analysis applications.
A simple proof that random matrices are democratic The recently introduced theory of compressive sensing (CS) enables the reconstruction of sparse or compressible signals from a small set of nonadaptive, linear measurements. If properly chosen, the number of measurements can be significantly smaller than the ambient dimension of the signal and yet preserve the significant signal information. Interestingly, it can be shown that random measurement schemes provide a near-optimal encoding in terms of the required number of measurements. In this report, we explore another relatively unexplored, though often alluded to, advantage of using random matrices to acquire CS measurements. Specifically, we show that random matrices are democratic, meaning that each measurement carries roughly the same amount of signal information. We demonstrate that by slightly increasing the number of measurements, the system is robust to the loss of a small number of arbitrary measurements. In addition, we draw connections to oversampling and demonstrate stability from the loss of significantly more measurements.
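The "democracy" point above can be illustrated numerically: recover a sparse vector from random Gaussian measurements, then discard a handful of arbitrary measurements and recover again. The sketch uses a plain Orthogonal Matching Pursuit decoder as a stand-in reconstruction algorithm (the report itself argues via different machinery), and the problem sizes are arbitrary.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily select k atoms to explain y."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(3)
n, m, k = 256, 60, 5
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

A = rng.standard_normal((m, n)) / np.sqrt(m)     # random Gaussian measurement matrix
y = A @ x_true

x_full = omp(A, y, k)

# Discard 10 arbitrary measurements and recover again from the rest.
keep = np.setdiff1d(np.arange(m), rng.choice(m, 10, replace=False))
x_drop = omp(A[keep], y[keep], k)

print("error, all measurements:", np.linalg.norm(x_full - x_true))
print("error, 10 dropped      :", np.linalg.norm(x_drop - x_true))
```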
Implementing Competitive Learning in a Quantum System Ideas from quantum computation are applied to the field of neural networks to produce competitive learning in a quantum system. The resulting quantum competitive learner has a prototype storage capacity that is exponentially greater than that of its classical counterpart. Further, empirical results from simulation of the quantum competitive learning system on real-world data sets demonstrate the quantum system's potential for excellent performance.
Scores: 1.036897, 0.043182, 0.035304, 0.030379, 0.028111, 0.016924, 0.002264, 0.000054, 0.000001, 0, 0, 0, 0, 0
Impact of interconnect variations on the clock skew of a gigahertz microprocessor Due to the large die sizes and tight relative clock skew margins, the impact of interconnect manufacturing variations on the clock skew in today's gigahertz microprocessors can no longer be ignored. Unlike manufacturing variations in the devices, the impact of the interconnect manufacturing variations on IC timing performance cannot be captured by worst/best case corner point methods. Thus it is difficult to estimate the clock skew variability due to interconnect variations. In this paper we analyze the timing impact of several key statistically independent interconnect variations in a context-dependent manner by applying a previously reported interconnect variational order-reduction technique. The results show that the interconnect variations can cause up to 25% clock skew variability in a modern microprocessor design.
Proceedings of the 41st Design Automation Conference, DAC 2004, San Diego, CA, USA, June 7-11, 2004
VITA: variation-aware interconnect timing analysis for symmetric and skewed sources of variation considering variational ramp input As technology scales down, timing verification of digital integrated circuits becomes an extremely difficult task due to statistical variations in the gate and wire delays. Statistical timing analysis techniques are being developed to tackle this important problem. In this paper, we propose a new framework for handling variation-aware interconnect timing analysis in which the sources of variation may have symmetric or skewed distributions. To achieve this goal, we express the resistance and capacitance of a line in canonical first order forms and then use these to compute the circuit moments. The variational moments are subsequently used to compute the interconnect delay and slew at each node of an RC tree. For this step, we combine known closed-form delay metrics such as Elmore and AWE-based algorithms to take advantage of the efficiency of the first category and the accuracy of the second. Experimental results show an average error of 2% for interconnect delay and slew with respect to SPICE-based Monte Carlo simulations.
Fast Analysis of a Large-Scale Inductive Interconnect by Block-Structure-Preserved Macromodeling To efficiently analyze the large-scale interconnect dominant circuits with inductive couplings (mutual inductances), this paper introduces a new state matrix, called VNA, to stamp inverse-inductance elements by replacing inductive-branch current with flux. The state matrix under VNA is diagonal-dominant, sparse, and passive. To further explore the sparsity and hierarchy at the block level, a new matrix-stretching method is introduced to reorder coupled fluxes into a decoupled state matrix with a bordered block diagonal (BBD) structure. A corresponding block-structure-preserved model-order reduction, called BVOR, is developed to preserve the sparsity and hierarchy of the BBD matrix at the block level. This enables us to efficiently build and simulate the macromodel within a SPICE-like circuit simulator. Experiments show that our method achieves up to 7× faster modeling building time, up to 33× faster simulation time, and as much as 67× smaller waveform error compared to SAPOR [a second-order reduction based on nodal analysis (NA)] and PACT (a first-order 2×2 structured reduction based on modified NA).
Computation and Refinement of Statistical Bounds on Circuit Delay The growing impact of within-die process variation has created the need for statistical timing analysis, where gate delays are modeled as random variables. Statistical timing analysis has traditionally suffered from exponential run time complexity with circuit size, due to arrival time dependencies created by reconverging paths in the circuit. In this paper, we propose a new approach to statistical timing analysis that is based on statistical bounds of the circuit delay. Since these bounds have linear run time complexity with circuit size, they can be computed efficiently for large circuits. Since both a lower and upper bound on the true statistical delay is available, the quality of the bounds can be determined. If the computed bounds are not sufficiently close to each other, we propose a heuristic to iteratively improve the bounds using selective enumeration of the sample space with additional run time. We demonstrate that the proposed bounds have only a small error and that by carefully selecting a small set of nodes for enumeration, this error can be further reduced.
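A toy Monte Carlo experiment shows why reconverging paths make statistical timing hard, which is the difficulty the bounding approach above targets: two paths that share a gate have correlated delays, and analyzing them as if independent shifts the estimated tail. The circuit, delay distributions, and percentile below are illustrative only; this is not the paper's bounding algorithm.

```python
import numpy as np

rng = np.random.default_rng(4)
N = 100_000

# Tiny circuit: gate g0 fans out to two paths (g0->g1 and g0->g2) that reconverge.
# Gate delays are independent Gaussians; the two path delays share g0, so they are correlated.
g0 = rng.normal(10, 2, N)
g1 = rng.normal(12, 3, N)
g2 = rng.normal(12, 3, N)

delay = np.maximum(g0 + g1, g0 + g2)            # circuit delay samples, correlation kept

# Naive analysis that ignores the shared gate: treat the two path delays as independent,
# keeping the same per-path mean and variance.
p1 = rng.normal(22, np.sqrt(2**2 + 3**2), N)
p2 = rng.normal(22, np.sqrt(2**2 + 3**2), N)
naive = np.maximum(p1, p2)

q = 0.99
print("99th percentile, correlated paths  :", np.quantile(delay, q))
print("99th percentile, independence model:", np.quantile(naive, q))
```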
Recent computational developments in Krylov subspace methods for linear systems Many advances in the development of Krylov subspace methods for the iterative solution of linear systems during the last decade and a half are reviewed. These new developments include different versions of restarted, augmented, deflated, flexible, nested, and inexact methods. Also reviewed are methods specifically tailored to systems with special properties such as special forms of symmetry and those depending on one or more parameters. Copyright (c) 2006 John Wiley & Sons, Ltd.
Statistical Timing for Parametric Yield Prediction of Digital Integrated Circuits Uncertainty in circuit performance due to manufacturing and environmental variations is increasing with each new generation of technology. It is therefore important to predict the performance of a chip as a probabilistic quantity. This paper proposes three novel path-based algorithms for statistical timing analysis and parametric yield prediction of digital integrated circuits. The methods have been implemented in the context of the EinsTimer static timing analyzer. The three methods are complementary in that they are designed to target different process variation conditions that occur in practice. Numerical results are presented to study the strengths and weaknesses of these complementary approaches. Timing analysis results in the face of statistical temperature and Vdd variations are presented on an industrial ASIC part on which a bounded timing methodology leads to surprisingly wrong results
On the Passivity of Polynomial Chaos-Based Augmented Models for Stochastic Circuits. This paper addresses for the first time the issue of passivity of the circuit models produced by means of the generalized polynomial chaos technique in combination with the stochastic Galerkin method. This approach has been used in literature to obtain statistical information through the simulation of an augmented but deterministic instance of a stochastic circuit, possibly including distributed t...
VGTA: Variation Aware Gate Timing Analysis As technology scales down, timing verification of digital integrated circuits becomes an extremely difficult task due to gate and wire variability. Therefore, statistical timing analysis is inevitable. Most timing tools divide the analysis into two parts: 1) interconnect (wire) timing analysis and 2) gate timing analysis. Variational interconnect delay calculation for block-based TA has been recently studied. However, variational gate delay calculation has remained unexplored. In this paper, we propose a new framework to handle the variation-aware gate timing analysis in block-based TA. First, we present an approach to approximate variational RC load by using a canonical first-order model. Next, an efficient variation-aware effective capacitance calculation based on statistical input transition, statistical gate timing library, and statistical RC load is presented. In this step, we use a single-iteration Ceff calculation which is efficient and reasonably accurate. Finally, we calculate the statistical gate delay and output slew based on the aforementioned model. Experimental results show an average error of 7% for gate delay and output slew with respect to the HSPICE Monte Carlo simulation while the runtime is about 145 times faster.
Estimation of delay variations due to random-dopant fluctuations in nano-scaled CMOS circuits In nanoscale CMOS circuits the random dopant fluctuations (RDF) cause significant threshold voltage (Vt) variations in transistors. In this paper, we propose a semi-analytical estimation methodology to predict the delay distribution [Mean and Standard Deviation (STD)] of logic circuits considering Vt variation in transistors. The proposed method is fast and can be used to predict delay distributio...
An augmented Lagrangian approach to the constrained optimization formulation of imaging inverse problems. We propose a new fast algorithm for solving one of the standard approaches to ill-posed linear inverse problems (IPLIP), where a (possibly nonsmooth) regularizer is minimized under the constraint that the solution explains the observations sufficiently well. Although the regularizer and constraint are usually convex, several particular features of these problems (huge dimensionality, nonsmoothness) preclude the use of off-the-shelf optimization tools and have stimulated a considerable amount of research. In this paper, we propose a new efficient algorithm to handle one class of constrained problems (often known as basis pursuit denoising) tailored to image recovery applications. The proposed algorithm, which belongs to the family of augmented Lagrangian methods, can be used to deal with a variety of imaging IPLIP, including deconvolution and reconstruction from compressive observations (such as MRI), using either total-variation or wavelet-based (or, more generally, frame-based) regularization. The proposed algorithm is an instance of the so-called alternating direction method of multipliers, for which convergence sufficient conditions are known; we show that these conditions are satisfied by the proposed algorithm. Experiments on a set of image restoration and reconstruction benchmark problems show that the proposed algorithm is a strong contender for the state-of-the-art.
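For reference, the unconstrained l1-regularized counterpart of basis pursuit denoising can be solved with a few lines of iterative shrinkage/thresholding (ISTA); the paper above develops a faster augmented Lagrangian / alternating direction scheme, which is not reproduced here. Problem sizes, the noise level, and the regularization weight lam are arbitrary choices for the demo.

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding operator, the proximal map of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, y, lam, n_iter=500):
    """ISTA for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft(x - (A.T @ (A @ x - y)) / L, lam / L)
    return x

rng = np.random.default_rng(5)
m, n, k = 60, 200, 6
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true + 0.01 * rng.standard_normal(m)

x_hat = ista(A, y, lam=0.02)
print("relative recovery error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```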
Compressive sensing of light fields We propose a novel camera design for light field image acquisition using compressive sensing. By utilizing a randomly coded non-refractive mask in front of the aperture, incoherent measurements of the light passing through different regions are encoded in the captured images. A novel reconstruction algorithm is proposed to recover the original light field image from these acquisitions. Using the principles of compressive sensing, we demonstrate that light field images with high angular dimension can be captured with only a few acquisitions. Moreover, the proposed design provides images with high spatial resolution and signal-to-noise-ratio (SNR), and therefore does not suffer from limitations common to existing light-field camera designs. Experimental results demonstrate the efficiency of the proposed system.
An Evaluation of Parameterized Gradient Based Routing With QoE Monitoring for Multiple IPTV Providers. Future communication networks will be faced with increasing and variable traffic demand, due largely to various services introduced on the Internet. One particular service that will greatly impact resource management of future communication networks is IPTV, which aims to provide users with a multitude of multimedia services (e.g. HD and SD) for both live and on demand streaming. The impact of thi...
Overview of HEVC High-Level Syntax and Reference Picture Management The increasing proportion of video traffic in telecommunication networks puts an emphasis on efficient video compression technology. High Efficiency Video Coding (HEVC) is the forthcoming video coding standard that provides substantial bit rate reductions compared to its predecessors. In the HEVC standardization process, technologies such as picture partitioning, reference picture management, and parameter sets are categorized as “high-level syntax.” The design of the high-level syntax impacts the interface to systems and error resilience, and provides new functionalities. This paper presents an overview of the HEVC high-level syntax, including network abstraction layer unit headers, parameter sets, picture partitioning schemes, reference picture management, and supplemental enhancement information messages.
Scores: 1.014957, 0.00868, 0.008483, 0.008406, 0.005028, 0.003465, 0.001591, 0.000184, 0.000059, 0.000016, 0, 0, 0, 0
A multiscale framework for Compressive Sensing of video Compressive Sensing (CS) allows the highly efficient acquisition of many signals that could be difficult to capture or encode using conventional methods. From a relatively small number of random measurements, a high-dimensional signal can be recovered if it has a sparse or near-sparse representation in a basis known to the decoder. In this paper, we consider the application of CS to video signals in order to lessen the sensing and compression burdens in single- and multi-camera imaging systems. In standard video compression, motion compensation and estimation techniques have led to improved sparse representations that are more easily compressible; we adapt these techniques for the problem of CS recovery. Using a coarse-to-fine reconstruction algorithm, we alternate between the tasks of motion estimation and motion-compensated wavelet-domain signal recovery. We demonstrate that our algorithm allows the recovery of video sequences from fewer measurements than either frame-by-frame or inter-frame difference recovery methods.
Modified compressive sensing for real-time dynamic MR imaging In this work, we propose algorithms to recursively and causally reconstruct a sequence of natural images from a reduced number of linear projection measurements taken in a domain that is "incoherent" with respect to the image's sparsity basis (typically wavelet) and demonstrate their application in real-time MR image reconstruction. For a static version of the above problem, Compressed Sensing (CS) provides a provably exact and computationally efficient solution. But most existing solutions for the actual problem are either offline and non-causal or cannot compute an exact reconstruction (for truly sparse signal sequences), except using as many measurements as those needed for CS. The key idea of our proposed solution (modified-CS) is to design a modification of CS when a part of the support set is known (available from reconstructing the previous image). We demonstrate the exact reconstruction property of modified-CS on full-size image sequences using much fewer measurements than those required for CS. Greatly improved performance over existing work is demonstrated for approximately sparse signals or noisy measurements.
Compressive-projection principal component analysis. Principal component analysis (PCA) is often central to dimensionality reduction and compression in many applications, yet its data-dependent nature as a transform computed via expensive eigendecomposition often hinders its use in severely resource-constrained settings such as satellite-borne sensors. A process is presented that effectively shifts the computational burden of PCA from the resource-constrained encoder to a presumably more capable base-station decoder. The proposed approach, compressive-projection PCA (CPPCA), is driven by projections at the sensor onto lower-dimensional subspaces chosen at random, while the CPPCA decoder, given only these random projections, recovers not only the coefficients associated with the PCA transform, but also an approximation to the PCA transform basis itself. An analysis is presented that extends existing Rayleigh-Ritz theory to the special case of highly eccentric distributions; this analysis in turn motivates a reconstruction process at the CPPCA decoder that consists of a novel eigenvector reconstruction based on a convex-set optimization driven by Ritz vectors within the projected subspaces. As such, CPPCA constitutes a fundamental departure from traditional PCA in that it permits its excellent dimensionality-reduction and compression performance to be realized in an light-encoder/heavy-decoder system architecture. In experimental results, CPPCA outperforms a multiple-vector variant of compressed sensing for the reconstruction of hyperspectral data.
Joint Compressive Video Coding and Analysis Traditionally, video acquisition, coding and analysis have been designed and optimized as independent tasks. This has a negative impact in terms of consumed resources, as most of the raw information captured by conventional acquisition devices is discarded in the coding phase, while the analysis step only requires a few descriptors of salient video characteristics. Recent compressive sensing literature has partially broken this paradigm by proposing to integrate sensing and coding in a unified architecture composed by a light encoder and a more complex decoder, which exploits sparsity of the underlying signal for efficient recovery. However, a clear understanding of how to embed video analysis in this scheme is still missing. In this paper, we propose a joint compressive video coding and analysis scheme and, as a specific application example, we consider the problem of object tracking in video sequences. We show that, weaving together compressive sensing and the information computed by the analysis module, the bit-rate required to perform reconstruction and tracking of the foreground objects can be considerably reduced, with respect to a conventional disjoint approach that postpones the analysis after the video signal is recovered in the pixel domain. These findings suggest that considerable gains in performance can be potentially obtained in video analysis applications, provided that a joint analysis-aware design of acquisition, coding and signal recovery is carried out.
Joint reconstruction of compressed multi-view images This paper proposes a distributed representation algorithm for multi-view images that are jointly reconstructed at the decoder. Compressed versions of each image are first obtained independently with random projections. The multiple images are then jointly reconstructed by the decoder, under the assumption that the correlation between images can be represented by local geometric transformations. We build on the compressed sensing framework and formulate the joint reconstruction as a l2-l1 optimization problem. It tends to minimize the MSE distortion of the decoded images, under the constraint that these images have sparse and correlated representations over a structured dictionary of atoms. Simulation results with multi-view images demonstrate that our approach achieves better reconstruction results than independent decoding. Moreover, we show the advantage of structured dictionaries for capturing the geometrical correlation between multi-view images.
Compressive Acquisition of Dynamic Scenes Compressive sensing (CS) is a new approach for the acquisition and recovery of sparse signals and images that enables sampling rates significantly below the classical Nyquist rate. Despite significant progress in the theory and methods of CS, little headway has been made in compressive video acquisition and recovery. Video CS is complicated by the ephemeral nature of dynamic events, which makes direct extensions of standard CS imaging architectures and signal models infeasible. In this paper, we develop a new framework for video CS for dynamic textured scenes that models the evolution of the scene as a linear dynamical system (LDS). This reduces the video recovery problem to first estimating the model parameters of the LDS from compressive measurements, from which the image frames are then reconstructed. We exploit the low-dimensional dynamic parameters (the state sequence) and high-dimensional static parameters (the observation matrix) of the LDS to devise a novel compressive measurement strategy that measures only the dynamic part of the scene at each instant and accumulates measurements over time to estimate the static parameters. This enables us to considerably lower the compressive measurement rate. We validate our approach with a range of experiments including classification experiments that highlight the effectiveness of the proposed approach.
Single-Pixel Imaging via Compressive Sampling In this article, the authors present a new approach to building simpler, smaller, and cheaper digital cameras that can operate efficiently across a broader spectral range than conventional silicon-based cameras. The approach fuses a new camera architecture based on a digital micromirror device with the new mathematical theory and algorithms of compressive sampling.
Compressed Sensing. Suppose x is an unknown vector in ℝ^m (a digital image or signal); we plan to measure n general linear functionals of x and then reconstruct. If x is known to be compressible by transform coding with a known transform, and we reconstruct via the nonlinear procedure defined here, the number of measurements n can be dramatically smaller than the size m. Thus, certain natural classes of images with m pixels need only n = O(m^(1/4) log^(5/2)(m)) nonadaptive nonpixel samples for faithful recovery, as opposed to the usual m pixel samples. More specifically, suppose x has a sparse representation in some orthonormal basis (e.g., wavelet, Fourier) or tight frame (e.g., curvelet, Gabor), so the coefficients belong to an ℓp ball for 0 < p ≤ 1. The N most important coefficients in that expansion allow reconstruction with ℓ2 error O(N^(1/2-1/p)). It is possible to design n = O(N log(m)) nonadaptive measurements allowing reconstruction with accuracy comparable to that attainable with direct knowledge of the N most important coefficients. Moreover, a good approximation to those N important coefficients is extracted from the n measurements by solving a linear program, Basis Pursuit in signal processing. The nonadaptive measurements have the character of "random" linear combinations of basis/frame elements. Our results use the notions of optimal recovery, of n-widths, and information-based complexity. We estimate the Gel'fand n-widths of ℓp balls in high-dimensional Euclidean space in the case 0 < p ≤ 1, and give a criterion identifying near-optimal subspaces for Gel'fand n-widths. We show that "most" subspaces are near-optimal, and show that convex optimization (Basis Pursuit) is a near-optimal way to extract information derived from these near-optimal subspaces.
Imaging via Compressive Sampling Image compression algorithms convert high-resolution images into relatively small bit streams, in effect turning a large digital data set into a substantially smaller one. This article introduces compressive sampling and recovery using convex programming.
Guaranteed passive balancing transformations for model order reduction The major concerns in state-of-the-art model reduction algorithms are: achieving accurate models of sufficiently small size, numerically stable and efficient generation of the models, and preservation of system properties such as passivity. Algorithms, such as PRIMA, generate guaranteed-passive models for systems with special internal structure, using numerically stable and efficient Krylov-subspace iterations. Truncated balanced realization (TBR) algorithms, as used to date in the design automation community, can achieve smaller models with better error control, but do not necessarily preserve passivity. In this paper, we show how to construct TBR-like methods that generate guaranteed passive reduced models and in addition are applicable to state-space systems with arbitrary internal structure.
Preconditioning Stochastic Galerkin Saddle Point Systems Mixed finite element discretizations of deterministic second-order elliptic PDEs lead to saddle point systems for which the study of iterative solvers and preconditioners is mature. Galerkin approximation of solutions of stochastic second-order elliptic PDEs, which couple standard mixed finite element discretizations in physical space with global polynomial approximation on a probability space, also give rise to linear systems with familiar saddle point structure. For stochastically nonlinear problems, the solution of such systems presents a serious computational challenge. The blocks are sums of Kronecker products of pairs of matrices associated with two distinct discretizations, and the systems are large, reflecting the curse of dimensionality inherent in most stochastic approximation schemes. Moreover, for the problems considered herein, the leading blocks of the saddle point matrices are block-dense, and the cost of a matrix vector product is nontrivial. We implement a stochastic Galerkin discretization for the steady-state diffusion problem written as a mixed first-order system. The diffusion coefficient is assumed to be a lognormal random field, approximated via a nonlinear function of a finite number of Gaussian random variables. We study the resulting saddle point systems and investigate the efficiency of block-diagonal preconditioners of Schur complement and augmented type for use with the minimal residual method (MINRES). By introducing so-called Kronecker product preconditioners, we improve the robustness of cheap, mean-based preconditioners with respect to the statistical properties of the stochastically nonlinear diffusion coefficients.
Sublinear time, measurement-optimal, sparse recovery for all An approximate sparse recovery system in l1 norm makes a small number of measurements of a noisy vector with at most k large entries and recovers those heavy hitters approximately. Formally, it consists of parameters N, k, ε, an m-by-N measurement matrix, φ, and a decoding algorithm, D. Given a vector, x, where xk denotes the optimal k-term approximation to x, the system approximates x by [EQUATION], which must satisfy [EQUATION] Among the goals in designing such systems are minimizing the number m of measurements and the runtime of the decoding algorithm, D. We consider the "forall" model, in which a single matrix φ, possibly "constructed" non-explicitly using the probabilistic method, is used for all signals x. Many previous papers have provided algorithms for this problem. But all such algorithms that use the optimal number m = O(k log(N/k)) of measurements require superlinear time Ω(N log(N/k)). In this paper, we give the first algorithm for this problem that uses the optimum number of measurements (up to constant factors) and runs in sublinear time o(N) when k is sufficiently less than N. Specifically, for any positive integer l, our approach uses time O(l5ε-3k(N/k)1/l) and uses m = O(l8ε-3k log(N/k)) measurements, with access to a data structure requiring space and preprocessing time O(lNk0.2/ε).
Utilizing redundancy for timing critical interconnect Conventionally, the topology of signal net routing is almost always restricted to Steiner trees, either unbuffered or buffered. However, introducing redundant paths into the topology (which leads to non-tree) may significantly improve timing performance as well as tolerance to open faults and variations. These advantages are particularly appealing for timing critical net routings in nanoscale VLSI designs where interconnect delay is a performance bottleneck and variation effects are increasingly remarkable. We propose Steiner network construction heuristics which can generate either tree or non-tree with different slack-wirelength tradeoff, and handle both long path and short path constraints. We also propose heuristics for simultaneous Steiner network construction and buffering, which may provide further improvement in slack and resistance to variations. Furthermore, incremental non-tree delay update techniques are developed to facilitate fast Steiner network evaluations. Extensive experiments in different scenarios show that our heuristics usually improve timing slack by hundreds of pico seconds compared to traditional approaches. When process variations are considered, our heuristics can significantly improve timing yield because of nominal slack improvement and delay variability reduction.
Multi-criteria decision making method based on possibility degree of interval type-2 fuzzy number This paper proposes a new approach based on possibility degree to solve multi-criteria decision making (MCDM) problems in which the criteria value takes the form of interval type-2 fuzzy number. First, a new expected value function is defined and an optimal model based on maximizing deviation method is constructed to obtain weight coefficients when criteria weight information is partially known. Then, the overall value of each alternative is calculated by the defined aggregation operators. Furthermore, a new possibility degree, which is proposed to overcome some drawbacks of the existing methods, is introduced for comparisons between the overall values of alternatives to construct a possibility degree matrix. Based on the constructed matrix, all of the alternatives are ranked according to the ranking vector derived from the matrix, and the best one is selected. Finally, the proposed method is applied to a case study on the overseas minerals investment for one of the largest multi-species nonferrous metals companies in China and the results demonstrate the feasibility of the method.
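A small sketch of ranking by possibility degree, using the classical possibility-degree formula for plain closed intervals and simple row averages of the resulting matrix as a ranking vector; the paper above defines its own possibility degree for interval type-2 fuzzy numbers and a different ranking vector, so treat the formulas below as generic stand-ins. The alternative scores are invented.

```python
def poss_degree(a, b):
    """Possibility degree P(a >= b) for closed intervals a = [a1, a2], b = [b1, b2]."""
    la, lb = a[1] - a[0], b[1] - b[0]
    return max(1 - max((b[1] - a[0]) / (la + lb), 0), 0)

# Three alternatives with interval-valued overall scores.
scores = {"A1": (0.45, 0.70), "A2": (0.50, 0.60), "A3": (0.30, 0.55)}

# Pairwise possibility-degree matrix and a simple ranking vector (row averages).
names = list(scores)
P = [[poss_degree(scores[i], scores[j]) for j in names] for i in names]
rank_vec = {n: sum(row) / len(row) for n, row in zip(names, P)}
print(sorted(rank_vec.items(), key=lambda kv: kv[1], reverse=True))
```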
Scores: 1.030476, 0.021716, 0.018799, 0.017891, 0.009167, 0.003839, 0.001783, 0.000276, 0.00007, 0.000004, 0, 0, 0, 0
Terminological difficulties in fuzzy set theory—The case of “Intuitionistic Fuzzy Sets” This note points out a terminological clash between Atanassov's “intuitionistic fuzzy sets” and what is currently understood as intuitionistic logic. They differ both by their motivations and their underlying mathematical structure. Furthermore, Atanassov's construct is isomorphic to interval-valued fuzzy sets and other similar notions, even if their interpretive settings and motivation are quite different, the latter capturing the idea of ill-known membership grade, while the former starts from the idea of evaluating degrees of membership and non-membership independently. This paper is a plea for a clarification of terminology, based on mathematical resemblances and the comparison of motivations between “intuitionistic fuzzy sets” and other theories.
A new method for multiattribute decision making using interval-valued intuitionistic fuzzy values.
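One common way to compare interval-valued intuitionistic fuzzy values is a score function with an accuracy function as a tie-breaker. The sketch below uses the widely cited conventions s = (a + b - c - d)/2 and h = (a + b + c + d)/2, where [a, b] is the membership interval and [c, d] the non-membership interval; these are generic choices and may not coincide with the method of the paper above. The alternative values are invented.

```python
# An interval-valued intuitionistic fuzzy value (IVIFV) is
# ([mu_lo, mu_hi], [nu_lo, nu_hi]) with mu_hi + nu_hi <= 1.
def score(v):
    (ml, mh), (nl, nh) = v
    return (ml + mh - nl - nh) / 2          # commonly used score function

def accuracy(v):
    (ml, mh), (nl, nh) = v
    return (ml + mh + nl + nh) / 2          # tie-breaking accuracy function

alternatives = {
    "x1": ((0.4, 0.5), (0.3, 0.4)),
    "x2": ((0.4, 0.6), (0.2, 0.4)),
    "x3": ((0.3, 0.5), (0.4, 0.5)),
}
ranked = sorted(alternatives,
                key=lambda a: (score(alternatives[a]), accuracy(alternatives[a])),
                reverse=True)
print(ranked)   # higher score (then higher accuracy) ranks first
```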
Mathematical-programming approach to matrix games with payoffs represented by Atanassov's interval-valued intuitionistic fuzzy sets The purpose of this paper is to develop the concept and mathematical-programming methodology of matrix games with payoffs represented by Atanassov's interval-valued intuitionistic fuzzy (IVIF) sets. In this methodology, the concept of solutions of matrix games with payoffs represented by Atanassov's IVIF sets is defined, and some important properties are studied using multiobjective-programming and duality-programming theory. It is proven that each matrix game with payoffs represented by Atanassov's IVIF sets has a solution, which can be obtained through solving a pair of auxiliary linear/nonlinear-programming models derived from a pair of nonlinear biobjective interval-programming models. Validity and applicability of the proposed methodology are illustrated with a numerical example. © 2006 IEEE.
Some aspects of intuitionistic fuzzy sets We first discuss the significant role that duality plays in many aggregation operations involving intuitionistic fuzzy subsets. We then consider the extension to intuitionistic fuzzy subsets of a number of ideas from standard fuzzy subsets. In particular we look at the measure of specificity. We also look at the problem of alternative selection when decision criteria satisfaction is expressed using intuitionistic fuzzy subsets. We introduce a decision paradigm called the method of least commitment. We briefly look at the problem of defuzzification of intuitionistic fuzzy subsets.
A Bilattice-Based Framework For Handling Graded Truth And Imprecision We present a family of algebraic structures, called rectangular bilattices, which serve as a natural accommodation and powerful generalization to both intuitionistic fuzzy sets (IFSs) and interval-valued fuzzy sets (IVFSs). These structures are useful on one hand to clarify the exact nature of the relationship between the above two common extensions of fuzzy sets, and on the other hand provide an intuitively attractive framework for the representation of uncertain and potentially conflicting information. We also provide these structures with adequately defined graded versions of the basic logical connectives, and study their properties and relationship. Application potential and intuitive appeal of the proposed framework are illustrated in the context of preference modeling.
Closeness coefficient based nonlinear programming method for interval-valued intuitionistic fuzzy multiattribute decision making with incomplete preference information The aim of this paper is to develop a closeness coefficient based nonlinear programming method for solving multiattribute decision making problems in which ratings of alternatives on attributes are expressed using interval-valued intuitionistic fuzzy (IVIF) sets and preference information on attributes is incomplete. In this methodology, nonlinear programming models are constructed on the concept of the closeness coefficient, which is defined as a ratio of the square of the weighted Euclidean distance between an alternative and the IVIF negative ideal solution (IVIFNIS) to the sum of the squares of the weighted Euclidean distances between the alternative and the IVIF positive ideal solution (IVIFPIS) as well as the IVIFNIS. Simpler nonlinear programming models are deduced to calculate closeness intuitionistic fuzzy sets of alternatives to the IVIFPIS, which are used to estimate the optimal degrees of membership and hereby generate ranking order of the alternatives. The derived auxiliary nonlinear programming models are shown to be flexible with different information structures and decision environments. The proposed method is validated and compared with other methods. A real example is examined to demonstrate applicability of the proposed method in this paper.
Type-2 Fuzzy Logic: Theory and Applications Type-2 fuzzy sets are used for modeling uncertainty and imprecision in a better way. These type-2 fuzzy sets were originally presented by Zadeh in 1975 and are essentially "fuzzy fuzzy" sets where the fuzzy degree of membership is a type-1 fuzzy set. The new concepts were introduced by Mendel and Liang allowing the characterization of a type-2 fuzzy set with a superior membership function and an inferior membership function; these two functions can be represented each one by a type-1 fuzzy set membership function. The interval between these two functions represents the footprint of uncertainty (FOU), which is used to characterize a type-2 fuzzy set.
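A minimal sketch of the footprint-of-uncertainty idea, assuming Gaussian upper and lower membership functions that share a mean but differ in spread; the membership of a point in an interval type-2 set is then the interval between the two curves. All parameter values are illustrative only.

```python
import numpy as np

def gauss(x, mean, sigma):
    return np.exp(-0.5 * ((x - mean) / sigma) ** 2)

def it2_membership(x, mean, sigma_lower, sigma_upper):
    """Membership interval of an interval type-2 fuzzy set whose FOU is
    bounded by two Gaussians sharing a mean but with different spreads."""
    lo = gauss(x, mean, sigma_lower)   # lower (inferior) membership function
    hi = gauss(x, mean, sigma_upper)   # upper (superior) membership function
    return lo, hi

x = np.linspace(0.0, 10.0, 5)
lo, hi = it2_membership(x, mean=5.0, sigma_lower=1.0, sigma_upper=2.0)
for xi, l, h in zip(x, lo, hi):
    print(f"x={xi:4.1f}  membership interval = [{l:.3f}, {h:.3f}]")
```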
Fuzzy Reasoning Based On The Extension Principle According to the operation of decomposition (also known as representation theorem) (Negoita CV, Ralescu, DA. Kybernetes 1975;4:169-174) in fuzzy set theory, the whole fuzziness of an object can be characterized by a sequence of local crisp properties of that object. Hence, any fuzzy reasoning could also be implemented by using a similar idea, i.e., a sequence of precise reasoning. More precisely, we could translate a fuzzy relation "If A then B" of the Generalized Modus Ponens Rule (the most common and widely used interpretation of a fuzzy rule, where A and B are fuzzy sets in a universe of discourse X and a universe of discourse Y, respectively) into a corresponding precise relation between a subset of P(X) and a subset of P(Y), and then extend this corresponding precise relation to two kinds of transformations between all L-type fuzzy subsets of X and those of Y by using Zadeh's extension principle, where L denotes a complete lattice. In this way, we provide an alternative approach to the existing compositional rule of inference, which performs fuzzy reasoning based on the extension principle. The approach does not depend on the choice of fuzzy implication operator nor on the choice of a t-norm. The detailed reasoning methods, applied in particular to the Generalized Modus Ponens and the Generalized Modus Tollens, are established and their properties are further investigated in this paper. (C) 2001 John Wiley & Sons, Inc.
An Interval Type-2 Fuzzy Logic System To Translate Between Emotion-Related Vocabularies This paper describes a novel experiment that demonstrates the feasibility of a fuzzy logic (FL) representation of emotion-related words used to translate between different emotional vocabularies. Type-2 fuzzy sets were encoded using input from web-based surveys that prompted users with emotional words and asked them to enter an interval using a double slider. The similarity of the encoded fuzzy sets was computed and it was shown that a reliable mapping can be made between a large vocabulary of emotional words and a smaller vocabulary of words naming seven emotion categories. Though the mapping results are comparable to Euclidean distance in the valence/activation/dominance space, the FL representation has several benefits that are discussed.
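The interval-encoding-and-similarity workflow sketched here is a simplified stand-in for the paper's type-2 similarity computation: each word is reduced to a single interval, and a Jaccard-style overlap score maps vocabulary words to the closest category. The word intervals below are invented for the example.

```python
# Illustrative only: interval summaries of words (e.g., means of survey responses
# on a 0-10 double slider). Values are invented for the example.
categories = {
    "joy":     (7.5, 9.5),
    "sadness": (1.0, 3.0),
    "anger":   (2.0, 4.5),
}
vocabulary = {
    "elated": (8.0, 10.0),
    "gloomy": (0.5, 2.5),
}

def interval_jaccard(a, b):
    """Overlap length / union length of two closed intervals (0 if disjoint)."""
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    union = max(a[1], b[1]) - min(a[0], b[0])
    return inter / union if union > 0 else 0.0

for word, iv in vocabulary.items():
    best = max(categories, key=lambda c: interval_jaccard(iv, categories[c]))
    print(word, "->", best)
```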
Perceptual reasoning for perceptual computing: a similarity-based approach Perceptual reasoning (PR) is an approximate reasoning method that can be used as a computing-with-words (CWW) engine in perceptual computing. There can be different approaches to implement PR, e.g., firing-interval-based PR (FI-PR), which has been proposed in J. M. Mendel and D. Wu, IEEE Trans. Fuzzy Syst., vol. 16, no. 6, pp. 1550-1564, Dec. 2008, and similarity-based PR (S-PR), which is proposed in this paper. Both approaches satisfy the requirement on a CWW engine that the result of combining fired rules should lead to a footprint of uncertainty (FOU) that resembles the three kinds of FOUs in a CWW codebook. A comparative study shows that S-PR leads to output FOUs that resemble word FOUs, which are obtained from subject data, much more closely than FI-PR; hence, S-PR is a better choice for a CWW engine than FI-PR.
A survey on fuzzy relational equations, part I: classification and solvability Fuzzy relational equations play an important role in fuzzy set theory and fuzzy logic systems, from both of the theoretical and practical viewpoints. The notion of fuzzy relational equations is associated with the concept of "composition of binary relations." In this survey paper, fuzzy relational equations are studied in a general lattice-theoretic framework and classified into two basic categories according to the duality between the involved composite operations. Necessary and sufficient conditions for the solvability of fuzzy relational equations are discussed and solution sets are characterized by means of a root or crown system under some specific assumptions.
On proactive perfectly secure message transmission This paper studies the interplay of network connectivity and perfectly secure message transmission under the corrupting influence of a Byzantine mobile adversary that may move from player to player but can corrupt no more than t players at any given time. It is known that, in the stationary adversary model where the adversary corrupts the same set of t players throughout the protocol, perfectly secure communication among any pair of players is possible if and only if the underlying synchronous network is (2t + 1)-connected. Surprisingly, we show that (2t + 1)-connectivity is sufficient (and of course, necessary) even in the proactive (mobile) setting where the adversary is allowed to corrupt different sets of t players in different rounds of the protocol. In other words, adversarial mobility has no effect on the possibility of secure communication. Towards this, we use the notion of a Communication Graph, which is useful in modelling scenarios with adversarial mobility. We also show that protocols for reliable and secure communication proposed in [15] can be modified to tolerate the mobile adversary. Further these protocols are round-optimal if the underlying network is a collection of disjoint paths from the sender S to receiver R.
NANOLAB: A Tool for Evaluating Reliability of Defect-Tolerant Nano Architectures As silicon manufacturing technology reaches the nanoscale, architectural designs need to accommodate the uncertainty inherent at such scales. These uncertainties are rooted in the minuscule dimensions of the devices, quantum physical effects, reduced noise margins, system energy levels reaching computing thermal limits, manufacturing defects, aging and many other factors. Defect tolerant architectures and their reliability measures will gain importance for logic and micro-architecture designs based on nano-scale substrates. Recently, Markov Random Field (MRF) has been proposed as a model of computation for nanoscale logic gates. In this paper, we take this approach further by automating this computational scheme along with a Belief Propagation algorithm. We have developed MATLAB-based libraries and a toolset for fundamental logic gates that can compute output probability distributions and entropies for specified input distributions. Our tool eases evaluation of reliability measures of combinational logic blocks. The effectiveness of this automation is illustrated in this paper by automatically deriving various reliability results for defect-tolerant architectures, such as Triple Modular Redundancy (TMR), Cascaded Triple Modular Redundancy (CTMR) and multi-stage iterations of these. These results are used to analyze trade-offs between reliability and redundancy for these architectural configurations.
Inclusion of Chemical-Mechanical Polishing Variation in Statistical Static Timing Analysis Technology trends show the importance of modeling process variation in static timing analysis. With the advent of statistical static timing analysis (SSTA), multiple independent sources of variation can be modeled. This paper proposes a methodology for modeling metal interconnect process variation in SSTA. The developed methodology is applied in this study to investigate metal variation in SSTA resulting from chemical-mechanical polishing (CMP). Using our statistical methodology, we show that CMP variation has a smaller impact on chip performance as compared to other factors impacting metal process variation.
1.006786
0.009303
0.0074
0.005903
0.005263
0.00217
0.000878
0.000273
0.000075
0.000027
0.000001
0
0
0
Sharp thresholds for high-dimensional and noisy sparsity recovery using l1-constrained quadratic programming (Lasso) The problem of consistently estimating the sparsity pattern of a vector β* ∈ R^p based on observations contaminated by noise arises in various contexts, including signal denoising, sparse approximation, compressed sensing, and model selection. We analyze the behavior of l1-constrained quadratic programming (QP), also referred to as the Lasso, for recovering the sparsity pattern. Our main result is to establish precise conditions on the problem dimension p, the number k of nonzero elements in β*, and the number of observations n that are necessary and sufficient for sparsity pattern recovery using the Lasso. We first analyze the case of observations made using deterministic design matrices and sub-Gaussian additive noise, and provide sufficient conditions for support recovery and l∞-error bounds, as well as results showing the necessity of incoherence and bounds on the minimum value. We then turn to the case of random designs, in which each row of the design is drawn from a N(0, Σ) ensemble. For a broad class of Gaussian ensembles satisfying mutual incoherence conditions, we compute explicit values of thresholds 0 < θl(Σ) ≤ θu(Σ) < +∞ with the following property: for any δ > 0, if n > 2(θu + δ)k log(p - k), then the Lasso succeeds in recovering the sparsity pattern with probability converging to one for large problems, whereas for n < 2(θl - δ)k log(p - k), the probability of successful recovery converges to zero. For the special case of the uniform Gaussian ensemble (Σ = I p×p), we show that θl = θu = 1, so that the precise threshold n = 2k log (p - k) is exactly determined.
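The threshold behaviour can be probed empirically with a toy experiment: solve the Lasso with a simple proximal-gradient (ISTA) loop for sample sizes below and above 2k log(p - k) and compare the recovered support with the truth. Scalings, noise level and regularization strength below are chosen only for illustration, not to reproduce the theorem's constants.

```python
import numpy as np

rng = np.random.default_rng(0)
p, k = 512, 8
beta_true = np.zeros(p)
support = rng.choice(p, size=k, replace=False)
beta_true[support] = rng.choice([-1.0, 1.0], size=k)

def ista_lasso(X, y, lam, iters=500):
    """Proximal-gradient (ISTA) solver for 0.5*||y - X b||^2 + lam*||b||_1."""
    L = np.linalg.norm(X, 2) ** 2          # Lipschitz constant of the gradient
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        g = X.T @ (X @ b - y)
        z = b - g / L
        b = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return b

threshold = 2 * k * np.log(p - k)          # the critical sample size from the abstract
for n in (int(0.5 * threshold), int(2.0 * threshold)):
    X = rng.standard_normal((n, p)) / np.sqrt(n)
    y = X @ beta_true + 0.1 * rng.standard_normal(n)
    b_hat = ista_lasso(X, y, lam=0.1)
    recovered = set(np.flatnonzero(np.abs(b_hat) > 0.2))
    print(f"n={n:4d}  exact support recovery: {recovered == set(support)}")
```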
Simultaneously Sparse Solutions to Linear Inverse Problems with Multiple System Matrices and a Single Observation Vector A problem that arises in slice-selective magnetic resonance imaging (MRI) radio-frequency (RF) excitation pulse design is abstracted as a novel linear inverse problem with a simultaneous sparsity constraint. Multiple unknown signal vectors are to be determined, where each passes through a different system matrix and the results are added to yield a single observation vector. Given the matrices and lone observation, the objective is to find a simultaneously sparse set of unknown vectors that approximately solves the system. We refer to this as the multiple-system single-output (MSSO) simultaneous sparse approximation problem. This manuscript contrasts the MSSO problem with other simultaneous sparsity problems and conducts an initial exploration of algorithms with which to solve it. Greedy algorithms and techniques based on convex relaxation are derived and compared empirically. Experiments involve sparsity pattern recovery in noiseless and noisy settings and MRI RF pulse design.
Thresholded Basis Pursuit: LP Algorithm for Order-Wise Optimal Support Recovery for Sparse and Approximately Sparse Signals From Noisy Random Measurements In this paper we present a linear programming solution for sign pattern recovery of a sparse signal from noisy random projections of the signal. We consider two types of noise models, input noise, where noise enters before the random projection; and output noise, where noise enters after the random projection. Sign pattern recovery involves the estimation of the sign pattern of a sparse signal. Our idea is to pretend that no noise exists and solve the noiseless $\ell_1$ problem, namely, $\min \|\beta\|_1 ~ s.t. ~ y=G \beta$, and then quantize the resulting solution. We show that the quantized solution perfectly reconstructs the sign pattern of a sufficiently sparse signal. Specifically, we show that the sign pattern of an arbitrary k-sparse, n-dimensional signal $x$ can be recovered with $SNR=\Omega(\log n)$ and measurements scaling as $m= \Omega(k \log{n/k})$ for all sparsity levels $k$ satisfying $0< k \leq \alpha n$, where $\alpha$ is a sufficiently small positive constant. Surprisingly, this bound matches the optimal \emph{Max-Likelihood} performance bounds in terms of $SNR$, required number of measurements, and admissible sparsity level in an order-wise sense. In contrast to our results, previous results based on LASSO and Max-Correlation techniques either assume significantly larger $SNR$, sublinear sparsity levels or restrictive assumptions on signal sets. Our proof technique is based on noisy perturbation of the noiseless $\ell_1$ problem, in that we estimate the maximum admissible noise level before sign pattern recovery fails.
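A hedged sketch of the thresholded-basis-pursuit recipe from the abstract: ignore the noise, solve the l1 problem as a linear program via the standard positive/negative split, and then quantize the result to a sign pattern. The problem sizes, noise level and quantization threshold are arbitrary choices for the example.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
n_dim, k, m = 64, 4, 32                 # signal dimension, sparsity, measurements
x = np.zeros(n_dim)
supp = rng.choice(n_dim, size=k, replace=False)
x[supp] = rng.choice([-1.0, 1.0], size=k) * rng.uniform(1.0, 2.0, size=k)

G = rng.standard_normal((m, n_dim)) / np.sqrt(m)
y = G @ x + 0.01 * rng.standard_normal(m)     # output-noise model

# Pretend there is no noise: solve min ||b||_1  s.t.  G b = y  as an LP
# with the split b = u - v, u, v >= 0.
c = np.ones(2 * n_dim)
A_eq = np.hstack([G, -G])
res = linprog(c, A_eq=A_eq, b_eq=y,
              bounds=[(0, None)] * (2 * n_dim), method="highs")
b_hat = res.x[:n_dim] - res.x[n_dim:]

# Quantize: keep only the sign pattern of entries above a small threshold.
signs = np.sign(b_hat) * (np.abs(b_hat) > 0.5)
print("true signs:     ", np.sign(x)[supp])
print("recovered signs:", signs[supp])
print("false positives:", int(np.count_nonzero(signs) - np.count_nonzero(signs[supp])))
```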
Block-sparsity: Coherence and efficient recovery We consider compressed sensing of block-sparse signals, i.e., sparse signals that have nonzero coefficients occurring in clusters. Based on an uncertainty relation for block-sparse signals, we define a block-coherence measure and show that a block-version of the orthogonal matching pursuit algorithm recovers block k-sparse signals in no more than k steps if the block-coherence is sufficiently small. The same condition on block-sparsity is shown to guarantee successful recovery through a mixed ℓ2/ℓ1 optimization approach. The significance of the results lies in the fact that making explicit use of block-sparsity can yield better reconstruction properties than treating the signal as being sparse in the conventional sense, thereby ignoring the additional structure in the problem.
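A small block orthogonal matching pursuit sketch in the spirit of the abstract: at each step the block of columns most correlated with the residual is selected, followed by a least-squares re-fit over all selected blocks. This is a generic block-OMP, not the exact algorithm or coherence analysis of the paper.

```python
import numpy as np

def block_omp(A, y, block_size, n_blocks_to_pick):
    """Greedy block orthogonal matching pursuit: repeatedly pick the block of
    columns most correlated with the residual, then re-fit by least squares."""
    n_blocks = A.shape[1] // block_size
    blocks = [np.arange(b * block_size, (b + 1) * block_size) for b in range(n_blocks)]
    chosen, residual = [], y.copy()
    for _ in range(n_blocks_to_pick):
        scores = [np.linalg.norm(A[:, idx].T @ residual) for idx in blocks]
        best = int(np.argmax(scores))
        if best not in chosen:
            chosen.append(best)
        cols = np.concatenate([blocks[b] for b in chosen])
        coef, *_ = np.linalg.lstsq(A[:, cols], y, rcond=None)
        residual = y - A[:, cols] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[cols] = coef
    return x_hat, sorted(chosen)

rng = np.random.default_rng(2)
m, n, d, k = 40, 80, 4, 2                   # measurements, dimension, block size, active blocks
A = rng.standard_normal((m, n)) / np.sqrt(m)
x = np.zeros(n)
active = rng.choice(n // d, size=k, replace=False)
for b in active:
    x[b * d:(b + 1) * d] = rng.standard_normal(d)
x_hat, found = block_omp(A, A @ x, d, k)
print("active blocks:", sorted(active.tolist()), "found:", found)
```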
Information-Theoretic Limits on Sparse Signal Recovery: Dense versus Sparse Measurement Matrices We study the information-theoretic limits of exactly recovering the support set of a sparse signal, using noisy projections defined by various classes of measurement matrices. Our analysis is high-dimensional in nature, in which the number of observations n, the ambient signal dimension p, and the signal sparsity k are all allowed to tend to infinity in a general manner. This paper makes two novel contributions. First, we provide sharper necessary conditions for exact support recovery using general (including non-Gaussian) dense measurement matrices. Combined with previously known sufficient conditions, this result yields sharp characterizations of when the optimal decoder can recover a signal for various scalings of the signal sparsity k and sample size n, including the important special case of linear sparsity (k = Θ(p)) using a linear scaling of observations (n = Θ(p)). Our second contribution is to prove necessary conditions on the number of observations n required for asymptotically reliable recovery using a class of γ-sparsified measurement matrices, where the measurement sparsity parameter γ(n, p, k) ∈ (0,1] corresponds to the fraction of nonzero entries per row. Our analysis allows general scaling of the quadruplet (n, p, k, γ), and reveals three different regimes, corresponding to whether measurement sparsity has no asymptotic effect, a minor effect, or a dramatic effect on the information-theoretic limits of the subset recovery problem.
Information-Theoretic Bounds On Sparsity Recovery In The High-Dimensional And Noisy Setting The problem of recovering the sparsity pattern of a fixed but unknown vector β* ∈ R^p based on a set of n noisy observations arises in a variety of settings, including subset selection in regression, graphical model selection, signal denoising, compressive sensing, and constructive approximation. Of interest are conditions on the model dimension p, the sparsity index s (number of non-zero entries in β*), and the number of observations n that are necessary and/or sufficient to ensure asymptotically perfect recovery of the sparsity pattern. This paper focuses on the information-theoretic limits of sparsity recovery: in particular, for a noisy linear observation model based on measurement vectors drawn from the standard Gaussian ensemble, we derive both a set of sufficient conditions for asymptotically perfect recovery using the optimal decoder, as well as a set of necessary conditions that any decoder must satisfy for perfect recovery. This analysis of optimal decoding limits complements our previous work [19] on thresholds for the behavior of l1-constrained quadratic programming for Gaussian measurement ensembles.
Joint Source–Channel Communication for Distributed Estimation in Sensor Networks Power and bandwidth are scarce resources in dense wireless sensor networks and it is widely recognized that joint optimization of the operations of sensing, processing and communication can result in significant savings in the use of network resources. In this paper, a distributed joint source-channel communication architecture is proposed for energy-efficient estimation of sensor field data at a distant destination and the corresponding relationships between power, distortion, and latency are analyzed as a function of number of sensor nodes. The approach is applicable to a broad class of sensed signal fields and is based on distributed computation of appropriately chosen projections of sensor data at the destination - phase-coherent transmissions from the sensor nodes enable exploitation of the distributed beamforming gain for energy efficiency. Random projections are used when little or no prior knowledge is available about the signal field. Distinct features of the proposed scheme include: (1) processing and communication are combined into one distributed projection operation; (2) it virtually eliminates the need for in-network processing and communication; (3) given sufficient prior knowledge about the sensed data, consistent estimation is possible with increasing sensor density even with vanishing total network power; and (4) consistent signal estimation is possible with power and latency requirements growing at most sublinearly with the number of sensor nodes even when little or no prior knowledge about the sensed data is assumed at the sensor nodes.
SPARLS: A Low Complexity Recursive $\mathcal{L}_1$-Regularized Least Squares Algorithm We develop a Recursive $\mathcal{L}_1$-Regularized Least Squares (SPARLS) algorithm for the estimation of a sparse tap-weight vector in the adaptive filtering setting. The SPARLS algorithm exploits noisy observations of the tap-weight vector output stream and produces its estimate using an Expectation-Maximization type algorithm. Simulation studies in the context of channel estimation, employing multi-path wireless channels, show that the SPARLS algorithm has significant improvement over the conventional widely-used Recursive Least Squares (RLS) algorithm, in terms of both mean squared error (MSE) and computational complexity.
A Theory for Sampling Signals From a Union of Subspaces One of the fundamental assumptions in traditional sampling theorems is that the signals to be sampled come from a single vector space (e.g., bandlimited functions). However, in many cases of practical interest the sampled signals actually live in a union of subspaces. Examples include piecewise polynomials, sparse representations, nonuniform splines, signals with unknown spectral support, overlapping echoes with unknown delay and amplitude, and so on. For these signals, traditional sampling schemes based on the single subspace assumption can be either inapplicable or highly inefficient. In this paper, we study a general sampling framework where sampled signals come from a known union of subspaces and the sampling operator is linear. Geometrically, the sampling operator can be viewed as projecting sampled signals into a lower dimensional space, while still preserving all the information. We derive necessary and sufficient conditions for invertible and stable sampling operators in this framework and show that these conditions are applicable in many cases. Furthermore, we find the minimum sampling requirements for several classes of signals, which indicates the power of the framework. The results in this paper can serve as a guideline for designing new algorithms for various applications in signal processing and inverse problems.
Widths of embeddings in function spaces We study the approximation, Gelfand and Kolmogorov numbers of embeddings in function spaces of Besov and Triebel-Lizorkin type. Our aim here is to provide sharp estimates in several cases left open in the literature and give a complete overview of the known results. We also add some historical remarks.
Fuzzy trust evaluation and credibility development in multi-agent systems E-commerce markets can increase their efficiency through the usage of intelligent agents which negotiate and execute contracts on behalf of their owners. The measurement and computation of trust to secure interactions between autonomous agents is crucial for the success of automated e-commerce markets. Building a knowledge sharing network among peer agents helps to overcome trust-related boundaries in an environment where least human intervention is desired. Nevertheless, a risk management model which allows individual customisation to meet the different security needs of agent-owners is vital. The calculation and measurement of trust in unsupervised virtual communities like multi-agent environments involves complex aspects such as credibility rating for opinions delivered by peer agents, or the assessment of past experiences with the peer node one wishes to interact with. The deployment of suitable algorithms and models imitating human reasoning can help to solve these problems. This paper proposes not only a customisable trust evaluation model based on fuzzy logic but also demonstrates the integration of post-interaction processes like business interaction reviews and credibility adjustment. Fuzzy logic provides a natural framework to deal with uncertainty and the tolerance of imprecise data inputs to fuzzy-based systems makes fuzzy reasoning especially attractive for the subjective tasks of trust evaluation, business-interaction review and credibility adjustment.
Membership maximization prioritization methods for fuzzy analytic hierarchy process Fuzzy analytic hierarchy process (FAHP) has increasingly been applied in many areas. The extent analysis method is a popular tool for prioritization in FAHP, although significant technical errors are identified in this study. To address these errors, this research proposes membership maximization prioritization methods (MMPMs) using different membership functions as novel solutions. Given the lack of research on effectiveness measurement for crisp/fuzzy prioritization methods, this study also proposes a membership fitness index to evaluate the effectiveness of the prioritization methods. Comparisons with other popular fuzzy/crisp prioritization methods, including modified fuzzy preference programming, direct least squares, and eigenvalue methods, are conducted, and the analyses indicate that MMPMs lead to much more reliable results in terms of the membership fitness index. A numerical example demonstrates the usability of MMPMs for FAHP, and thus MMPMs can effectively be applied to various decision analysis applications.
Depth Reconstruction Filter and Down/Up Sampling for Depth Coding in 3-D Video A depth image represents three-dimensional (3-D) scene information and is commonly used for depth image-based rendering (DIBR) to support 3-D video and free-viewpoint video applications. The virtual view is generally rendered by the DIBR technique and its quality depends highly on the quality of depth image. Thus, efficient depth coding is crucial to realize the 3-D video system. In this letter, w...
Split Bregman iterative algorithm for sparse reconstruction of electrical impedance tomography In this paper, we present an evaluation of the use of the split Bregman iterative algorithm for the L1-norm regularized inverse problem of electrical impedance tomography. Simulations are performed to validate that our algorithm is competitive in terms of the imaging quality and computational speed in comparison with several state-of-the-art algorithms. Results also indicate that in contrast to the conventional L2-norm regularization method and total variation (TV) regularization method, the L1-norm regularization method can sharpen the edges and is more robust against data noise.
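The split Bregman idea for an L1-regularized linear inverse problem can be sketched generically (this is not the paper's EIT forward model): the L1 term is split off through an auxiliary variable, giving a quadratic solve and a soft-thresholding step per iteration.

```python
import numpy as np

def split_bregman_l1(A, y, lam, mu=1.0, iters=200):
    """Split Bregman iteration for  min_x  0.5*||A x - y||^2 + lam*||x||_1.
    The L1 term is decoupled through an auxiliary variable d ~ x and a Bregman
    variable b, giving a quadratic x-update and a soft-thresholding d-update."""
    n = A.shape[1]
    AtA, Aty = A.T @ A, A.T @ y
    M = AtA + mu * np.eye(n)          # x-update matrix is fixed across iterations
    x = np.zeros(n)
    d = np.zeros(n)
    b = np.zeros(n)
    for _ in range(iters):
        x = np.linalg.solve(M, Aty + mu * (d - b))
        d = np.sign(x + b) * np.maximum(np.abs(x + b) - lam / mu, 0.0)
        b = b + x - d
    return x

rng = np.random.default_rng(3)
m, n, k = 60, 120, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.choice([-1.0, 1.0], size=k) * rng.uniform(0.5, 1.5, size=k)
y = A @ x_true + 0.01 * rng.standard_normal(m)
x_hat = split_bregman_l1(A, y, lam=0.05)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```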
1.015577
0.016951
0.01185
0.007356
0.006178
0.00409
0.001082
0.000111
0.000013
0
0
0
0
0
Hierarchical semi-numeric method for pairwise fuzzy group decision making. Gradual improvements to a single-level semi-numeric method, i.e., linguistic labels preference representation by fuzzy sets computation for pairwise fuzzy group decision making are summarized. The method is extended to solve multiple criteria hierarchical structure pairwise fuzzy group decision-making problems. The problems are hierarchically structured into focus, criteria, and alternatives. Decision makers express their evaluations of criteria and alternatives based on each criterion by using linguistic labels. The labels are converted into and processed in triangular fuzzy numbers (TFNs). Evaluations of criteria yield relative criteria weights. Evaluations of the alternatives, based on each criterion, yield a degree of preference for each alternative or a degree of satisfaction for each preference value. By using a neat ordered weighted average (OWA) or a fuzzy weighted average operator, solutions obtained based on each criterion are aggregated into final solutions. The hierarchical semi-numeric method is suitable for solving a larger and more complex pairwise fuzzy group decision-making problem. The proposed method has been verified and applied to solve some real cases and is compared to Saaty's (1996) analytic hierarchy process (AHP) method.
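A minimal sketch of the semi-numeric pipeline, assuming a made-up three-label vocabulary mapped to triangular fuzzy numbers: labels are converted to TFNs, aggregated with a weighted average (standing in for the neat OWA / fuzzy weighted average operators of the abstract), and defuzzified by the centroid. A full hierarchy would repeat this per criterion and then aggregate with the criteria weights.

```python
import numpy as np

# Triangular fuzzy numbers (TFNs) as (l, m, u) triples for a few linguistic labels.
LABELS = {
    "low":    (0.0, 0.0, 0.3),
    "medium": (0.2, 0.5, 0.8),
    "high":   (0.7, 1.0, 1.0),
}

def tfn_weighted_mean(tfns, weights):
    """Weighted average of TFNs; with TFN arithmetic this is done component-wise."""
    arr = np.asarray(tfns, dtype=float)          # shape (n, 3)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return tuple(w @ arr)

def defuzzify(tfn):
    """Centroid of a triangular fuzzy number."""
    l, m, u = tfn
    return (l + m + u) / 3.0

# Three decision makers rate one alternative on one criterion.
ratings = [LABELS["medium"], LABELS["high"], LABELS["high"]]
group_rating = tfn_weighted_mean(ratings, weights=[1, 1, 1])   # equal weights here
print("aggregated TFN:", group_rating, " crisp score:", round(defuzzify(group_rating), 3))
```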
Linguistic probabilities: theory and application Over the past two decades a number of different approaches to “fuzzy probabilities” have been presented. The use of the same term masks fundamental differences. This paper surveys these different theories, contrasting and relating them to one another. Problems with these existing approaches are noted and a theory of “linguistic probabilities” is developed, which seeks to retain the underlying insights of existing work whilst remedying its technical defects. It is shown how the axiomatic theory of linguistic probabilities can be used to develop linguistic Bayesian networks which have a wide range of practical applications. To illustrate this a detailed and realistic example in the domain of forensic statistics is presented.
Intelligent multi-criteria fuzzy group decision-making for situation assessments Organizational decisions and situation assessment are often made in groups, and decision and assessment processes involve various uncertain factors. To increase the efficiency of group decision-making, this study presents a new rational-political model as a systematic means of supporting group decision-making in an uncertain environment. The model takes advantage of both rational and political models and can handle inconsistent assessment, incomplete information and inaccurate opinions in deriving the best solution for the group decision under a sequential framework. The model particularly identifies three uncertain factors involved in a group decision-making process: decision makers' roles, preferences for alternatives, and judgments for assessment-criteria. Based on this model, an intelligent multi-criteria fuzzy group decision-making method is proposed to deal with the three uncertain factors described by linguistic terms. The proposed method uses general fuzzy numbers and aggregates these factors into a group satisfactory decision that achieves the most acceptable degree for the group. Inference rules are particularly introduced into the method for checking the consistency of individual preferences. Finally, a real case study on business situation assessment is used to illustrate the proposed method.
An adaptive consensus support model for group decision-making problems in a multigranular fuzzy linguistic context Different consensus models for group decision-making (GDM) problems have been proposed in the literature. However, all of them consider the consensus reaching process a rigid or inflexible one because its behavior remains fixed in all rounds of the consensus process. The aim of this paper is to improve the consensus reaching process in GDM problems defined in multigranular linguistic contexts, i.e., by using linguistic term sets with different cardinality to represent experts' preferences. To do that, we propose an adaptive consensus support system model for this type of decision-making problem, i.e., a process that adapts its behavior to the agreement achieved in each round. This adaptive model increases the convergence toward the consensus and, therefore, reduces the number of rounds to reach it.
Computing With Words In Decision Support Systems: An Overview On Models And Applications Decision making is inherent to mankind, as human beings daily face situations in which they should choose among different alternatives by means of reasoning and mental processes. Many of these decision problems are under uncertain environments with vague and imprecise information. This type of information is usually modelled by linguistic information because of the common use of language by the experts involved in the given decision situations, giving rise to linguistic decision making. The use of linguistic information in decision making demands processes of Computing with Words to solve the related decision problems. Different methodologies and approaches have been proposed to accomplish such processes in an accurate and interpretable way. The good performance of linguistic computing in dealing with uncertainty has led to its widespread use in different types of decision-based applications. This paper overviews the most significant and widely used linguistic computing models, given their key role in linguistic decision making, together with a wide range of the most recent applications of linguistic decision support models.
An approach for combining linguistic and numerical information based on the 2-tuple fuzzy linguistic representation model in decision-making In this paper we develop a procedure for combining numerical and linguistic information without loss of information in the transformation processes between numerical and linguistic information, taking as the base the 2-tuple fuzzy linguistic representation model. We shall analyze the conditions to impose on the linguistic term set in order to ensure that the combination procedure does not produce any loss of information. Afterwards the aggregation process will be applied to a decision...
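A hedged sketch of the 2-tuple representation on which the combination procedure rests: a value β in [0, g] is written as a pair (s_i, α) with symbolic translation α, and aggregation over β values loses no information in the round trip. The term set below is illustrative.

```python
# Minimal sketch of the 2-tuple linguistic model: a value beta in [0, g] (g+1 terms)
# is represented as (s_i, alpha) with i = round(beta) and alpha = beta - i in [-0.5, 0.5).
TERMS = ["none", "very_low", "low", "medium", "high", "very_high", "perfect"]  # s_0..s_6

def to_2tuple(beta):
    i = int(round(beta))
    return TERMS[i], beta - i

def from_2tuple(term, alpha):
    return TERMS.index(term) + alpha

# Aggregate assessments given on the same term set by averaging their beta values,
# then translating back -- no information is lost in the round trip.
assessments = [("high", 0.0), ("medium", 0.2), ("very_high", -0.4)]
betas = [from_2tuple(t, a) for t, a in assessments]
mean_beta = sum(betas) / len(betas)
print(to_2tuple(mean_beta))   # the collective 2-tuple assessment
```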
The impact of fuzziness in social choice paradoxes Since Arrow's main theorem showed the impossibility of a rational procedure in group decision making, many variations in restrictions and objectives have been introduced in order to find out the limits of such a negative result. But so far all those results are often presented as a proof of the great expected difficulties we always shall find pursuing a joint group decision from different individual opinions, if we pursue rational and ethical procedures. In this paper we shall review some of the alternative approaches fuzzy sets theory allows, showing among other things that the main assumption of Arrow's model, not being made explicit in his famous theorem, was its underlying binary logic (a crisp definition is implied in preferences, consistency, liberty, equality, consensus and every concept or piece of information). Moreover, we shall also point out that focusing the problem on the choice issue can be also misleading, at least when dealing with human behaviour.
Group decision-making model using fuzzy multiple attributes analysis for the evaluation of advanced manufacturing technology Selection of advanced manufacturing technology is important for improving manufacturing system competitiveness. This study builds a group decision-making model using fuzzy multiple attributes analysis to evaluate the suitability of manufacturing technology. Since numerous attributes have been considered in evaluating the manufacturing technology suitability, most information available in this stage is subjective and imprecise, and fuzzy sets theory provides a mathematical framework for modeling imprecision and vagueness. The proposed approach involved developing a fusion method of fuzzy information, which was assessed using both linguistic and numerical scales. In addition, an interactive decision analysis is developed to make a consistent decision. When evaluating the suitability of manufacturing technology, it may be necessary to improve upon the technology, and naturally advanced manufacturing technology is seen as the best direction for improvement. The flexible manufacturing system adopted in the Taiwanese bicycle industry is used in this study to illustrate the computational process of the proposed method. The results of this study are more objective and unbiased, owing to being generated by a group of decision-makers.
A Method Based on OWA Operator and Distance Measures for Multiple Attribute Decision Making with 2-Tuple Linguistic Information In this paper we develop a new method for 2-tuple linguistic multiple attribute decision making, namely the 2-tuple linguistic generalized ordered weighted averaging distance (2LGOWAD) operator. This operator is an extension of the OWA operator that utilizes generalized means, distance measures and uncertain information represented as 2-tuple linguistic variables. By using 2LGOWAD, it is possible to obtain a wide range of 2-tuple linguistic aggregation distance operators such as the 2-tuple linguistic maximum distance, the 2-tuple linguistic minimum distance, the 2-tuple linguistic normalized Hamming distance (2LNHD), the 2-tuple linguistic weighted Hamming distance (2LWHD), the 2-tuple linguistic normalized Euclidean distance (2LNED), the 2-tuple linguistic weighted Euclidean distance (2LWED), the 2-tuple linguistic ordered weighted averaging distance (2LOWAD) operator and the 2-tuple linguistic Euclidean ordered weighted averaging distance (2LEOWAD) operator. We study some of its main properties, and we further generalize the 2LGOWAD operator using quasi-arithmetic means. The result is the Quasi-2LOWAD operator. Finally we present an application of the developed operators to decision-making regarding the selection of investment strategies.
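The ordered-weighted-averaging distance at the core of 2LGOWAD can be sketched on plain real numbers (for example, the Δ⁻¹ values of 2-tuples); the generalized-mean parameter recovers Hamming-like, Euclidean-like and maximum-distance special cases mentioned in the abstract. Weights and vectors below are illustrative.

```python
import numpy as np

def owad(a, b, weights, lam=1.0):
    """Generalized ordered weighted averaging distance between two vectors:
    individual distances |a_i - b_i| are sorted in decreasing order, weighted,
    and combined with a generalized mean of order lam."""
    d = np.sort(np.abs(np.asarray(a, float) - np.asarray(b, float)))[::-1]
    w = np.asarray(weights, float)
    w = w / w.sum()
    return (w @ d ** lam) ** (1.0 / lam)

a, b = [0.7, 0.4, 0.9, 0.2], [0.5, 0.5, 0.3, 0.2]
w = [0.4, 0.3, 0.2, 0.1]
print("OWAD (lam=1, Hamming-like):  ", round(owad(a, b, w, lam=1.0), 4))
print("OWAD (lam=2, Euclidean-like):", round(owad(a, b, w, lam=2.0), 4))
print("max distance (w=[1,0,0,0]):  ", round(owad(a, b, [1, 0, 0, 0]), 4))
```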
Fundamentals Of Clinical Methodology: 2. Etiology The concept of etiology is analyzed and the possibilities and limitations of deterministic, probabilistic, and fuzzy etiology are explored. Different kinds of formal structures for the relation of causation are introduced which enable us to explicate the notion of cause on qualitative, comparative, and quantitative levels. The conceptual framework developed is an approach to a theory of causality that may be useful in etiologic research, in building nosological systems, and in differential diagnosis, therapeutic decision-making, and controlled clinical trials. The bearings of the theory are exemplified by examining the current Chlamydia pneumoniae hypothesis on the incidence of myocardial infarction. (C) 1998 Elsevier Science B.V. All rights reserved.
Lattices of convex normal functions The algebra of truth values of type-2 fuzzy sets is the set of all functions from the unit interval into itself, with operations defined in terms of certain convolutions of these functions with respect to pointwise max and min. This algebra has been studied rather extensively, both from a theoretical and from a practical point of view. It has a number of interesting subalgebras, and this paper is about the subalgebra of all convex normal functions, and closely related ones. These particular algebras are De Morgan algebras, and our concern is principally with their completeness as lattices. A special feature of our treatment is a representation of these algebras as monotone functions with pointwise order, making the operations more intuitive.
Defect tolerance at the end of the roadmap As feature sizes shrink closer to single digit nanometer dimensions, defect tolerance will become increasingly important. This is true whether the chips are manufactured using top-down methods, such as photolithography, or bottom-up assembly processes such as Chemically Assembled Electronic Nanotechnology (CAEN). In this chapter, we examine the consequences of this increased rate of defects, and describe a defect tolerance methodology centered around reconfigurable devices, a scalable testing method, and dynamic place-and-route. We summarize some of our own results in this area as well as those of others, and enumerate some future research directions required to make nanometer-scale computing a reality.
Efficient face candidates selector for face detection In this paper an efficient face candidates selector is proposed for face detection tasks in still gray level images. The proposed method acts as a selective attentional mechanism. Eye-analogue segments at a given scale are discovered by finding regions which are roughly as large as real eyes and are darker than their neighborhoods. Then a pair of eye-analogue segments are hypothesized to be eyes in a face and combined into a face candidate if their placement is consistent with the anthropological characteristic of human eyes. The proposed method is robust in that it can deal with illumination changes and moderate rotations. A subset of the FERET data set and the BioID face database are used to evaluate the proposed method. The proposed face candidates selector is successful in 98.75% and 98.6% cases, respectively.
A game-theoretic multipath routing for video-streaming services over Mobile Ad Hoc Networks The number of portable devices capable of maintaining wireless communications has increased considerably in the last decade. Such mobile nodes may form a spontaneous self-configured network connected by wireless links to constitute a Mobile Ad Hoc Network (MANET). As the number of mobile end users grows the demand of multimedia services, such as video-streaming, in such networks is envisioned to increase as well. One of the most appropriate video coding technique for MANETs is layered MPEG-2 VBR, which used with a proper multipath routing scheme improves the distribution of video streams. In this article we introduce a proposal called g-MMDSR (game theoretic-Multipath Multimedia Dynamic Source Routing), a cross-layer multipath routing protocol which includes a game theoretic approach to achieve a dynamic selection of the forwarding paths. The proposal seeks to improve the own benefits of the users whilst using the common scarce resources efficiently. It takes into account the importance of the video frames in the decoding process, which outperforms the quality of the received video. Our scheme has proved to enhance the performance of the framework and the experience of the end users. Simulations have been carried out to show the benefits of our proposal under different situations where high interfering traffic and mobility of the nodes are present.
1.115217
0.046282
0.030908
0.010394
0.008307
0.002414
0.000668
0.000214
0.000061
0.000011
0
0
0
0
Overview of the Stereo and Multiview Video Coding Extensions of the H.264/MPEG-4 AVC Standard Significant improvements in video compression capability have been demonstrated with the introduction of the H.264/MPEG-4 advanced video coding (AVC) standard. Since developing this standard, the Joint Video Team of the ITU-T Video Coding Experts Group (VCEG) and the ISO/IEC Moving Picture Experts Group (MPEG) has also standardized an extension of that technology that is referred to as multiview video coding (MVC). MVC provides a compact representation for multiple views of a video scene, such as multiple synchronized video cameras. Stereo-paired video for 3-D viewing is an important special case of MVC. The standard enables inter-view prediction to improve compression capability, as well as supporting ordinary temporal and spatial prediction. It also supports backward compatibility with existing legacy systems by structuring the MVC bitstream to include a compatible “base view.” Each other view is encoded at the same picture resolution as the base view. In recognition of its high-quality encoding capability and support for backward compatibility, the stereo high profile of the MVC extension was selected by the Blu-Ray Disc Association as the coding format for 3-D video with high-definition resolution. This paper provides an overview of the algorithmic design used for extending H.264/MPEG-4 AVC towards MVC. The basic approach of MVC for enabling inter-view prediction and view scalability in the context of H.264/MPEG-4 AVC is reviewed. Related supplemental enhancement information (SEI) metadata is also described. Various “frame compatible” approaches for support of stereo-view video as an alternative to MVC are also discussed. A summary of the coding performance achieved by MVC for both stereo- and multiview video is also provided. Future directions and challenges related to 3-D video are also briefly discussed.
Video Transport over Heterogeneous Networks Using SCTP and DCCP As the internet continues to grow and mature, transmission of multimedia content is expected to increase and comprise a large portion of overall data traffic. The internet is becoming increasingly heterogeneous with the advent and growth of diverse wireless access networks such as WiFi, 3G Cellular and WiMax. The provision of quality of service (QoS) for multimedia transport such as video traffic over such heterogeneous networks is complex and challenging. The quality of video transport depends on many factors; among the more important are network condition and transport protocol. Traditional transport protocols such as UDP/TCP lack the functional requirements to meet the QoS requirements of today's multimedia applications. Therefore, a number of improved transport protocols are being developed. SCTP and DCCP fall into this category. In this paper, our focus has been on evaluating SCTP and DCCP performance for MPEG4 video transport over heterogeneous (wired cum wireless) networks. The performance metrics used for this evaluation include throughput, delay and jitter. We also evaluated these measures for UDP in order to have a basis for comparison. Extensive simulations have been performed using a network simulator for video downloading and uploading. In this scenario, DCCP achieves higher throughput, with less delay and jitter than SCTP and UDP. Based on the results obtained in this study, we find that DCCP can better meet the QoS requirements for the transport of video streaming traffic.
DIRECT Mode Early Decision Optimization Based on Rate Distortion Cost Property and Inter-view Correlation. In this paper, an Efficient DIRECT Mode Early Decision (EDMED) algorithm is proposed for low complexity multiview video coding. Two phases are included in the proposed EDMED: 1) early decision of DIRECT mode is made before doing time-consuming motion estimation/disparity estimation, where adaptive rate-distortion (RD) cost threshold, inter-view DIRECT mode correlation and coded block pattern are jointly utilized; and 2) false rejected DIRECT mode macroblocks of the first phase are then successfully terminated based on weighted RD cost comparison between 16×16 and DIRECT modes for further complexity reduction. Experimental results show that the proposed EDMED algorithm achieves 11.76% more complexity reduction than that achieved by the state-of-the-art SDMET for the temporal views. Also, it achieves a reduction of 50.98% to 81.13% (69.15% on average) in encoding time for inter-view, which is 29.31% and 15.03% more than the encoding time reduction achieved by the state-of-the-art schemes. Meanwhile, the average Peak Signal-to-Noise Ratio (PSNR) degrades by 0.05 dB and the average bit rate increases by -0.37%, which is negligible. © 1963-2012 IEEE.
3D Video Transmissions Over LTE: A Performance Evaluation The emerging broadband cellular technology, i.e. the Long Term Evolution (LTE), aims to support different services with high data rates and strict Quality of Service (QoS) requirements. It can thus be considered a very promising architecture for three-dimensional (3D) video transmission. Unlike conventional 2D video, depth perception is the most important aspect characterizing 3D streams. It significantly influences the mobile users' Quality of Experience (QoE). However, it requires the transmission of additional information as well as more bandwidth and a lower loss probability. The goal of this paper is to investigate how both 3D video formats and their average encoding rate impact the quality experienced by the end users when the video flow is delivered through the LTE network. To this aim, objective metrics like the ratio of lost packets, Peak Signal to Noise Ratio, delay, and goodput are adopted for measuring QoS and QoE degrees. At the end of this analysis, we provide some important considerations about the LTE effectiveness for 3D video delivery.
A reliable decentralized Peer-to-Peer Video-on-Demand system using helpers. We propose a decentralized Peer-to-Peer (P2P) Video-on-Demand (VoD) system. The traditional data center architecture is eliminated and is replaced by a large set of distributed, dynamic and individually unreliable helpers. The system leverages the strength of numbers to effect reliable cooperative content distribution, removing the drawbacks of conventional data center architectures including complexity of maintenance, high power consumption and lack of scalability. In the proposed VoD system, users and helper "servelets" cooperate in a P2P manner to deliver the video stream. Helpers are preloaded with only a small fraction of parity coded video data packets, and form into swarms each serving partial video content. The total number of helpers is optimized to guarantee high quality of service. In cases of helper churn, the helper network is also able to regenerate itself by users and helpers working cooperatively to repair the lost data, which yields a highly reliable system. Analysis and simulation results corroborate the feasibility and effectiveness of the proposed architecture.
A new methodology to derive objective quality assessment metrics for scalable multiview 3D video coding With the growing demand for 3D video, efforts are underway to incorporate it in the next generation of broadcast and streaming applications and standards. 3D video is currently available in games, entertainment, education, security, and surveillance applications. A typical scenario for multiview 3D consists of several 3D video sequences captured simultaneously from the same scene with the help of multiple cameras from different positions and through different angles. Multiview video coding provides a compact representation of these multiple views by exploiting the large amount of inter-view statistical dependencies. One of the major challenges in this field is how to transmit the large amount of data of a multiview sequence over error prone channels to heterogeneous mobile devices with different bandwidth, resolution, and processing/battery power, while maintaining a high visual quality. Scalable Multiview 3D Video Coding (SMVC) is one of the methods to address this challenge; however, the evaluation of the overall visual quality of the resulting scaled-down video requires a new objective perceptual quality measure specifically designed for scalable multiview 3D video. Although several subjective and objective quality assessment methods have been proposed for multiview 3D sequences, no comparable attempt has been made for quality assessment of scalable multiview 3D video. In this article, we propose a new methodology to build suitable objective quality assessment metrics for different scalable modalities in multiview 3D video. Our proposed methodology considers the importance of each layer and its content as a quality of experience factor in the overall quality. Furthermore, in addition to the quality of each layer, the concept of disparity between layers (inter-layer disparity) and disparity between the units of each layer (intra-layer disparity) is considered as an effective feature to evaluate overall perceived quality more accurately. Simulation results indicate that by using this methodology, more efficient objective quality assessment metrics can be introduced for each multiview 3D video scalable modalities.
Adaptive 3D multi-view video streaming over P2P networks Streaming 3D multi-view video to multiple clients simultaneously remains a highly challenging problem due to the high volume of data involved and the inherent limitations imposed by the delivery networks. Delivery of multimedia streams over Peer-to-Peer (P2P) networks has gained great interest due to its ability to maximise link utilisation, preventing the transport of multiple copies of the same packet for many users. On the other hand, the quality of experience can still be significantly degraded by dynamic variations caused by congestion, unless content-aware precautionary mechanisms and adaptation methods are deployed. In this paper, a novel, adaptive multi-view video streaming over a P2P system is introduced which addresses the next generation high resolution multi-view users' experiences with autostereoscopic displays. The solution comprises the extraction of low-overhead supplementary metadata at the media encoding server that is distributed through the network and used by clients performing network adaptation. In the proposed concept, pre-selected views are discarded at times of network congestion and reconstructed with high quality using the metadata and the neighbouring views. The experimental results show that the robustness of P2P multi-view streaming using the proposed adaptation scheme is significantly increased under congestion.
High-quality video view interpolation using a layered representation The ability to interactively control viewpoint while watching a video is an exciting application of image-based rendering. The goal of our work is to render dynamic scenes with interactive viewpoint control using a relatively small number of video cameras. In this paper, we show how high-quality video-based rendering of dynamic scenes can be accomplished using multiple synchronized video streams combined with novel image-based modeling and rendering algorithms. Once these video streams have been processed, we can synthesize any intermediate view between cameras at any time, with the potential for space-time manipulation. In our approach, we first use a novel color segmentation-based stereo algorithm to generate high-quality photoconsistent correspondences across all camera views. Mattes for areas near depth discontinuities are then automatically extracted to reduce artifacts during view synthesis. Finally, a novel temporal two-layer compressed representation that handles matting is developed for rendering at interactive rates.
Is QoE estimation based on QoS parameters sufficient for video quality assessment? Internet service providers today offer a variety of audio, video and data services. Traditional approaches for quality assessment of video services were based on Quality of Service (QoS) measurement. These measurements are considered performance measurements at the network level. However, in order to make an accurate quality assessment, the video must be assessed subjectively by the user. QoS parameters, on the other hand, are easier to obtain than subjective QoE scores. Therefore, some recent works have investigated objective approaches to estimate QoE scores based on measured QoS parameters. The main purpose is the control of QoE based on QoS measurements. This paper reviews several solutions and models presented in the literature. We discuss some other factors that must be considered in the mapping process between QoS and QoE. The impact of these factors on perceived QoE is verified through subjective tests.
NIST Net: a Linux-based network emulation tool Testing of network protocols and distributed applications has become increasingly complex, as the diversity of networks and underlying technologies increase, and the adaptive behavior of applications becomes more sophisticated. In this paper, we present NIST Net, a tool to facilitate testing and experimentation with network code through emulation. NIST Net enables experimenters to model and effect arbitrary performance dynamics (packet delay, jitter, bandwidth limitations, congestion, packet loss and duplication) on live IP packets passing through a commodity Linux-based PC router. We describe the emulation capabilities of NIST Net; examine its architecture; and discuss some of the implementation challenges encountered in building such a tool to operate at very high network data rates while imposing minimal processing overhead. Calibration results are provided to quantify the fidelity and performance of NIST Net over a wide range of offered loads (up to 1 Gbps), and a diverse set of emulated performance dynamics.
Genetic Learning Of Fuzzy Rule-Based Classification Systems Cooperating With Fuzzy Reasoning Methods In this paper, we present a multistage genetic learning process for obtaining linguistic fuzzy rule-based classification systems that integrates fuzzy reasoning methods cooperating with the fuzzy rule base and learns the best set of linguistic hedges for the linguistic variable terms. We show the application of the genetic learning process to two well known sample bases, and compare the results with those obtained from different learning algorithms. The results show the good behavior of the proposed method, which maintains the linguistic description of the fuzzy rules. (C) 1998 John Wiley & Sons, Inc.
Remembrance of circuits past: macromodeling by data mining in large analog design spaces The introduction of simulation-based analog synthesis tools creates a new challenge for analog modeling. These tools routinely visit 10^3 to 10^5 fully simulated circuit solution candidates. What might we do with all this circuit data? We show how to adapt recent ideas from large-scale data mining to build models that capture significant regions of this visited performance space, parameterized by variables manipulated by synthesis, trained by the data points visited during synthesis. Experimental results show that we can automatically build useful nonlinear regression models for large analog design spaces.
Efficient face candidates selector for face detection In this paper an efficient face candidates selector is proposed for face detection tasks in still gray level images. The proposed method acts as a selective attentional mechanism. Eye-analogue segments at a given scale are discovered by finding regions which are roughly as large as real eyes and are darker than their neighborhoods. Then a pair of eye-analogue segments are hypothesized to be eyes in a face and combined into a face candidate if their placement is consistent with the anthropological characteristic of human eyes. The proposed method is robust in that it can deal with illumination changes and moderate rotations. A subset of the FERET data set and the BioID face database are used to evaluate the proposed method. The proposed face candidates selector is successful in 98.75% and 98.6% cases, respectively.
New AdaBoost algorithm based on interval-valued fuzzy sets This paper presents a new extension of the AdaBoost algorithm based on interval-valued fuzzy sets. The extension concerns the weights assigned to the samples of the training set. The original weights are real numbers from the interval [0, 1]. In our approach the weights are represented by interval-valued fuzzy sets, that is, each weight has a lower and an upper membership function. The weight of the appropriate weak classifier takes the same value of the lower and upper membership functions. In our study we use boosting by reweighting, where each weak classifier is based on the recursive partitioning method. The described algorithm was tested on two generated data sets and two sets from the UCI repository. The obtained results are compared with the original AdaBoost algorithm.
1.004115
0.004693
0.004441
0.004032
0.003779
0.003636
0.002192
0.00135
0.000032
0.000002
0
0
0
0
Tensor Decompositions for Signal Processing Applications: From two-way to multiway component analysis. The widespread use of multi-sensor technology and the emergence of big datasets has highlighted the limitations of standard flat-view matrix models and the necessity to move towards more versatile data analysis tools. We show that higher-order tensors (i.e., multiway arrays) enable such a fundamental paradigm shift towards models that are essentially polynomial and whose uniqueness, unlike the matrix methods, is guaranteed under very mild and natural conditions. Benefiting from the power of multilinear algebra as their mathematical backbone, data analysis techniques using tensor decompositions are shown to have great flexibility in the choice of constraints that match data properties, and to find more general latent components in the data than matrix-based methods. A comprehensive introduction to tensor decompositions is provided from a signal processing perspective, starting from the algebraic foundations, via basic Canonical Polyadic and Tucker models, through to advanced cause-effect and multi-view data analysis schemes. We show that tensor decompositions enable natural generalizations of some commonly used signal processing paradigms, such as canonical correlation and subspace techniques, signal separation, linear regression, feature extraction and classification. We also cover computational aspects, and point out how ideas from compressed sensing and scientific computing may be used for addressing the otherwise unmanageable storage and manipulation problems associated with big datasets. The concepts are supported by illustrative real world case studies illuminating the benefits of the tensor framework, as efficient and promising tools for modern signal processing, data analysis and machine learning applications; these benefits also extend to vector/matrix data through tensorization. Keywords: ICA, NMF, CPD, Tucker decomposition, HOSVD, tensor networks, Tensor Train.
Phase Noise and Noise Induced Frequency Shift in Stochastic Nonlinear Oscillators Phase noise plays an important role in the performances of electronic oscillators. Traditional approaches describe the phase noise problem as a purely diffusive process. In this paper we develop a novel phase model reduction technique for phase noise analysis of nonlinear oscillators subject to stochastic inputs. We obtain analytical equations for both the phase deviation and the probability density function of the phase deviation. We show that, in general, the phase reduced models include non Markovian terms. Under the Markovian assumption, we demonstrate that the effect of white noise is to generate both phase diffusion and a frequency shift, i.e. phase noise is best described as a convection-diffusion process. The analysis of a solvable model shows the accuracy of our theory, and that it gives better predictions than traditional phase models.
Macromodel Generation for BioMEMS Components Using a Stabilized Balanced Truncation Plus Trajectory Piecewise-Linear Approach In this paper, we present a technique for automatically extracting nonlinear macromodels of biomedical microelectromechanical systems devices from physical simulation. The technique is a modification of the recently developed trajectory piecewise-linear approach, but uses ideas from balanced truncation to produce much lower order and more accurate models. The key result is a perturbation analysis of an instability problem with the reduction algorithm, and a simple modification that makes the algorithm more robust. Results are presented from examples to demonstrate dramatic improvements in reduced model accuracy and show the limitations of the method.
Identification of PARAFAC-Volterra cubic models using an Alternating Recursive Least Squares algorithm A broad class of nonlinear systems can be modelled by the Volterra series representation. However, its practical use in nonlinear system identification is sometimes limited due to the large number of parameters associated with the Volterra filters structure. This paper is concerned with the problem of identification of third-order Volterra kernels. A tensorial decomposition called PARAFAC is used to represent such a kernel. A new algorithm called the Alternating Recursive Least Squares (ARLS) algorithm is applied to identify this decomposition for estimating the Volterra kernels of cubic systems. This method significantly reduces the computational complexity of Volterra kernel estimation. Simulation results show the ability of the proposed method to achieve a good identification and an important complexity reduction, i.e. representation of Volterra cubic kernels with few parameters.
A tensor-based volterra series black-box nonlinear system identification and simulation framework. Tensors are a multi-linear generalization of matrices to their d-way counterparts, and are receiving intense interest recently due to their natural representation of high-dimensional data and the availability of fast tensor decomposition algorithms. Given the input-output data of a nonlinear system/circuit, this paper presents a nonlinear model identification and simulation framework built on top of Volterra series and its seamless integration with tensor arithmetic. By exploiting partially-symmetric polyadic decompositions of sparse Toeplitz tensors, the proposed framework permits a pleasantly scalable way to incorporate high-order Volterra kernels. Such an approach largely eludes the curse of dimensionality and allows computationally fast modeling and simulation beyond weakly nonlinear systems. The black-box nature of the model also hides structural information of the system/circuit and encapsulates it in terms of compact tensors. Numerical examples are given to verify the efficacy, efficiency and generality of this tensor-based modeling and simulation framework.
Bayesian Sparse Tucker Models for Dimension Reduction and Tensor Completion Tucker decomposition is the cornerstone of modern machine learning on tensorial data analysis, which has attracted considerable attention for multiway feature extraction, compressive sensing, and tensor completion. The most challenging problem is related to determination of model complexity (i.e., multilinear rank), especially when noise and missing data are present. In addition, existing methods cannot take into account uncertainty information of latent factors, resulting in low generalization performance. To address these issues, we present a class of probabilistic generative Tucker models for tensor decomposition and completion with structural sparsity over the multilinear latent space. To exploit structural sparse modeling, we introduce two group sparsity inducing priors by hierarchical representation of Laplace and Student-t distributions, which facilitates full posterior inference. For model learning, we derive variational Bayesian inferences over all model (hyper)parameters, and develop efficient and scalable algorithms based on multilinear operations. Our methods can automatically adapt model complexity and infer an optimal multilinear rank by the principle of maximum lower bound of model evidence. Experimental results and comparisons on synthetic, chemometrics and neuroimaging data demonstrate the remarkable performance of our models for recovering the ground-truth multilinear rank and missing entries.
Modelling and simulation of autonomous oscillators with random parameters Abstract: We consider periodic problems of autonomous systems of ordinary differential equations or differential algebraic equations. To quantify uncertainties of physical parameters, we introduce random variables in the systems. Phase conditions are required to compute the resulting periodic random process. It follows that the variance of the process depends on the choice of the phase condition. We derive a necessary condition for a random process with a minimal total variance by the calculus of variations. A corresponding numerical method is constructed based on the generalised polynomial chaos. We present numerical simulations of two test examples.
Fourier Series Approximation for Max Operation in Non-Gaussian and Quadratic Statistical Static Timing Analysis The most challenging problem in the current block-based statistical static timing analysis (SSTA) is how to handle the max operation efficiently and accurately. Existing SSTA techniques suffer from limited modeling capability by using a linear delay model with Gaussian distribution, or have scalability problems due to expensive operations involved to handle non-Gaussian variation sources or nonlinear delays. To overcome these limitations, we propose efficient algorithms to handle the max operation in SSTA with both quadratic delay dependency and non-Gaussian variation sources simultaneously. Based on such algorithms, we develop an SSTA flow with quadratic delay model and non-Gaussian variation sources. All the atomic operations, max and add, are calculated efficiently via either closed-form formulas or low dimension (at most 2-D) lookup tables. We prove that the complexity of our algorithm is linear in both variation sources and circuit sizes, hence our algorithm scales well for large designs. Compared to Monte Carlo simulation for non-Gaussian variation sources and nonlinear delay models, our approach predicts the mean, standard deviation and 95% percentile point with less than 2% error, and the skewness with less than 10% error.
Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information This paper considers the model problem of reconstructing an object from incomplete frequency samples. Consider a discrete-time signal f ∈ C^N and a randomly chosen set of frequencies Ω. Is it possible to reconstruct f from the partial knowledge of its Fourier coefficients on the set Ω? A typical result of this paper is as follows. Suppose that f is a superposition of |T| spikes, f(t) = Σ_{τ∈T} f(τ)δ(t−τ), obeying |T| ≤ C_M·(log N)^{-1}·|Ω| for some constant C_M > 0. We do not know the locations of the spikes nor their amplitudes. Then with probability at least 1 − O(N^{-M}), f can be reconstructed exactly as the solution to the ℓ1 minimization problem. In short, exact recovery may be obtained by solving a convex optimization problem. We give numerical values for C_M which depend on the desired probability of success. Our result may be interpreted as a novel kind of nonlinear sampling theorem. In effect, it says that any signal made out of |T| spikes may be recovered by convex programming from almost every set of frequencies of size O(|T|·log N). Moreover, this is nearly optimal in the sense that any method succeeding with probability 1 − O(N^{-M}) would in general require a number of frequency samples at least proportional to |T|·log N. The methodology extends to a variety of other situations and higher dimensions. For example, we show how one can reconstruct a piecewise constant (one- or two-dimensional) object from incomplete frequency samples - provided that the number of jumps (discontinuities) obeys the condition above - by minimizing other convex functionals such as the total variation of f.
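A minimal sketch of the recovery principle described above, solving the ℓ1 minimization with cvxpy. A random Gaussian matrix stands in for partial Fourier sampling, and all problem sizes are illustrative assumptions rather than the paper's exact setup.

```python
# Sparse recovery by l1 minimization (basis pursuit); illustrative sketch only.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
N, m, k = 128, 40, 5                                  # signal length, measurements, spikes
x_true = np.zeros(N)
x_true[rng.choice(N, k, replace=False)] = rng.standard_normal(k)

A = rng.standard_normal((m, N)) / np.sqrt(m)          # stand-in measurement matrix
y = A @ x_true                                        # incomplete linear measurements

x = cp.Variable(N)
prob = cp.Problem(cp.Minimize(cp.norm1(x)), [A @ x == y])
prob.solve()

print("recovery error:", np.linalg.norm(x.value - x_true))
```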
The Combination Technique for the Sparse Grid Solution of PDE's on Multiprocessor Machines. We present a new method for the solution of partial differential equations. In contrast to the usual approach, which in the 2-D case needs O(h_n^{-2}) grid points, our combination technique works with only O(h_n^{-1} ld(h_n^{-1})) grid points, where h_n denotes the employed grid size. The accuracy of the obtained solution deteriorates only slightly from O(h_n^2) to O(h_n^2 ld(h_n^{-1})) for a sufficiently smooth solution. Additionally, the new method is perfectly suited for...
Multi-level Monte Carlo Finite Element method for elliptic PDEs with stochastic coefficients In Monte Carlo methods quadrupling the sample size halves the error. In simulations of stochastic partial differential equations (SPDEs), the total work is the sample size times the solution cost of an instance of the partial differential equation. A Multi-level Monte Carlo method is introduced which allows, in certain cases, to reduce the overall work to that of the discretization of one instance of the deterministic PDE. The model problem is an elliptic equation with stochastic coefficients. Multi-level Monte Carlo errors and work estimates are given both for the mean of the solutions and for higher moments. The overall complexity of computing mean fields as well as k-point correlations of the random solution is proved to be of log-linear complexity in the number of unknowns of a single Multi-level solve of the deterministic elliptic problem. Numerical examples complete the theoretical analysis.
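The telescoping-sum idea behind the Multi-level Monte Carlo estimator described above can be sketched as follows. The "level-l solver" here is a toy quadrature rule rather than a finite element solve, and the per-level sample counts are illustrative choices, not the optimized allocation from the analysis.

```python
# Multi-level Monte Carlo sketch: E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}].
import numpy as np

rng = np.random.default_rng(1)

def P(theta, level):
    """Level-l approximation (2**level midpoints) of integral_0^1 sin(pi*t*theta) dt."""
    n = 2 ** level
    t = (np.arange(n) + 0.5) / n
    return np.mean(np.sin(np.pi * t * theta))

L = 6
N = [2 ** (12 - l) for l in range(L + 1)]   # fewer samples on finer (costlier) levels

estimate = 0.0
for l in range(L + 1):
    theta = rng.random(N[l])
    if l == 0:
        corrections = np.array([P(th, 0) for th in theta])
    else:
        # same random input for both levels, so the correction has small variance
        corrections = np.array([P(th, l) - P(th, l - 1) for th in theta])
    estimate += corrections.mean()

print("MLMC estimate of E[P]:", estimate)
```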
Group Decision Making Based On Computing With Linguistic Variables And An Example In Information System Selection In decision making problems with multiple experts as group decision making (GDM), each expert expresses his/her preferences or judgments depending on his/her own knowledge or experiences. A more realistic approach may be to use linguistic assessments instead of numerical values. Therefore, decision-makers' opinions are described by linguistic variables which can be expressed in different types. In this paper, a transformation method will be presented to transform the non-homogeneous linguistic information to a standard linguistic term set. Then, a decision-making model is proposed based on computing with 2-tuple linguistic variables to deal with the group decision making problems. According to the concept of the TOPSIS, a 2-tuple linguistic closeness coefficient is defined to determine the ranking order of all alternatives by calculating the distances to both the linguistic positive-ideal solution (LPIS) and the linguistic negative-ideal solution (LNIS) simultaneously. Finally, an example of information system selection is implemented at the end of this paper to demonstrate the procedure for the proposed method.
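For readers unfamiliar with the TOPSIS step referenced above, here is a minimal crisp (non-linguistic, non-fuzzy) sketch of the closeness-coefficient computation; the decision matrix, weights, and the assumption that all criteria are benefits are made up for illustration.

```python
# Crisp TOPSIS sketch: weighted normalized matrix, ideal solutions, closeness coefficient.
import numpy as np

X = np.array([[7., 8., 6.],      # alternatives x criteria (hypothetical scores)
              [6., 9., 8.],
              [8., 6., 7.]])
w = np.array([0.5, 0.3, 0.2])    # criterion weights (hypothetical)

R = X / np.linalg.norm(X, axis=0)        # vector-normalize each criterion
V = R * w                                 # weighted normalized matrix

v_pos = V.max(axis=0)                     # positive ideal solution (benefit criteria)
v_neg = V.min(axis=0)                     # negative ideal solution

d_pos = np.linalg.norm(V - v_pos, axis=1)
d_neg = np.linalg.norm(V - v_neg, axis=1)
closeness = d_neg / (d_pos + d_neg)       # closeness coefficient in [0, 1]

print("ranking (best first):", np.argsort(-closeness))
```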
Delay fault simulation with bounded gate delay model Previously reported work on path and gate delay tests fails to analyze path reconvergences when a bounded gate delay model is used. While robust path delay tests are of the highest quality, most path faults are only testable nonrobustly. Many non-robust tests are usually found but, in practice, are easily invalidated by hazards. The invalidation of non-robust tests occurs primarily due to non-zero delays of off-path circuit elements that may reconverge. Thus, non-robust tests are of limited value when process variations cause gate delays to vary. For gate delay faults, failure to recognize the correlations among the ambiguity waveforms at inputs of reconvergent gates causes fault coverages to be optimistic. This paper enhances a recently published ambiguity simulation algorithm [5] to accurately measure both non-robust path and gate delay coverages for the bounded delay model. Experimental results for the ISCAS circuits show that accurate results are often 20-30% less than the optimistic ones that fail to analyze signal reconvergences.
Bacterial Community Reconstruction Using A Single Sequencing Reaction Bacteria are the unseen majority on our planet, with millions of species and comprising most of the living protoplasm. While current methods enable in-depth study of a small number of communities, a simple tool for breadth studies of bacterial population composition in a large number of samples is lacking. We propose a novel approach for reconstruction of the composition of an unknown mixture of bacteria using a single Sanger-sequencing reaction of the mixture. This method is based on compressive sensing theory, which deals with reconstruction of a sparse signal using a small number of measurements. Utilizing the fact that in many cases each bacterial community is comprised of a small subset of the known bacterial species, we show the feasibility of this approach for determining the composition of a bacterial mixture. Using simulations, we show that sequencing a few hundred base-pairs of the 16S rRNA gene sequence may provide enough information for reconstruction of mixtures containing tens of species, out of tens of thousands, even in the presence of realistic measurement noise. Finally, we show initial promising results when applying our method for the reconstruction of a toy experimental mixture with five species. Our approach may have a potential for a practical and efficient way for identifying bacterial species compositions in biological samples.
1.10204
0.10408
0.10408
0.10408
0.10408
0.05204
0.015324
0.001333
0.000146
0.000009
0
0
0
0
Combining geometry and combinatorics: a unified approach to sparse signal recovery Abstract. There are two main algorithmic approaches to sparse signal recovery: geometric and combinatorial. The geometric approach starts with a geometric constraint on the measurement matrix Φ and then uses linear programming to decode information about x from Φx. The combinatorial approach constructs Φ and a combinatorial decoding algorithm to match. We present a unified approach to these two classes of sparse signal recovery algorithms. The unifying elements are the adjacency matrices of high-quality unbalanced expanders. We generalize the notion of Restricted Isometry Property (RIP), crucial to compressed sensing results for signal recovery, from the Euclidean norm to the ℓp norm for p ≈ 1, and then show that unbalanced expanders are essentially equivalent to RIP-p matrices. From known deterministic constructions for such matrices, we obtain new deterministic measurement matrix constructions and algorithms for signal recovery which, compared to previous deterministic algorithms, are superior in either the number of measurements or in noise tolerance.
The statistical restricted isometry property and the Wigner semicircle distribution of incoherent dictionaries In this paper we formulate and prove a statistical version of the Candès-Tao restricted isometry property (SRIP for short) which holds in general for any incoherent dictionary which is a disjoint union of orthonormal bases. In addition, we prove that, under appropriate normalization, the eigenvalues of the associated Gram matrix fluctuate around λ = 1 according to the Wigner semicircle distribution. The result is then applied to various dictionaries that arise naturally in the setting of finite harmonic analysis, giving, in particular, a better understanding on a remark of Applebaum-Howard-Searle-Calderbank concerning RIP for the Heisenberg dictionary of chirp-like functions.
Space-optimal heavy hitters with strong error bounds The problem of finding heavy hitters and approximating the frequencies of items is at the heart of many problems in data stream analysis. It has been observed that several proposed solutions to this problem can outperform their worst-case guarantees on real data. This leads to the question of whether some stronger bounds can be guaranteed. We answer this in the positive by showing that a class of "counter-based algorithms" (including the popular and very space-efficient FREQUENT and SPACESAVING algorithms) provide much stronger approximation guarantees than previously known. Specifically, we show that errors in the approximation of individual elements do not depend on the frequencies of the most frequent elements, but only on the frequency of the remaining "tail." This shows that counter-based methods are the most space-efficient (in fact, space-optimal) algorithms having this strong error bound. This tail guarantee allows these algorithms to solve the "sparse recovery" problem. Here, the goal is to recover a faithful representation of the vector of frequencies, f. We prove that using space O(k), the algorithms construct an approximation f* to the frequency vector f so that the L1 error ‖f − f*‖1 is close to the best possible error min_{f'} ‖f' − f‖1, where f' ranges over all vectors with at most k non-zero entries. This improves the previously best known space bound of about O(k log n) for streams without element deletions (where n is the size of the domain from which stream elements are drawn). Other consequences of the tail guarantees are results for skewed (Zipfian) data, and guarantees for accuracy of merging multiple summarized streams.
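A compact sketch of the SPACESAVING counter-based algorithm discussed above; the counter budget k and the toy stream are illustrative, and the error bookkeeping records the per-item guarantee counts[i] − errors[i] ≤ f(i) ≤ counts[i].

```python
# SpaceSaving sketch: keep k counters, evict the minimum and inherit its count.
from collections import Counter

def space_saving(stream, k):
    counts, errors = {}, {}
    for item in stream:
        if item in counts:
            counts[item] += 1
        elif len(counts) < k:
            counts[item], errors[item] = 1, 0
        else:
            # evict the minimum counter; its count becomes the new item's error bound
            victim = min(counts, key=counts.get)
            c = counts.pop(victim)
            errors.pop(victim)
            counts[item], errors[item] = c + 1, c
    return counts, errors

stream = list("abracadabra" * 50) + list("xyz")
est, err = space_saving(stream, k=4)
print("estimates:", est, "error bounds:", err)
print("exact top-4:", Counter(stream).most_common(4))
```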
SPARLS: A Low Complexity Recursive $\mathcal{L}_1$-Regularized Least Squares Algorithm We develop a Recursive $\mathcal{L}_1$-Regularized Least Squares (SPARLS) algorithm for the estimation of a sparse tap-weight vector in the adaptive filtering setting. The SPARLS algorithm exploits noisy observations of the tap-weight vector output stream and produces its estimate using an Expectation-Maximization type algorithm. Simulation studies in the context of channel estimation, employing multi-path wireless channels, show that the SPARLS algorithm has significant improvement over the conventional widely-used Recursive Least Squares (RLS) algorithm, in terms of both mean squared error (MSE) and computational complexity.
Construction of a Large Class of Deterministic Sensing Matrices That Satisfy a Statistical Isometry Property Compressed Sensing aims to capture attributes of k-sparse signals using very few measurements. In the standard compressed sensing paradigm, the N × C measurement matrix Φ is required to act as a near isometry on the set of all k-sparse signals (restricted isometry property or RIP). Although it is known that certain probabilistic processes generate N × C matrices that satisfy RIP with high probability, there is no practical algorithm for verifying whether a given sensing matrix Φ has this property, crucial for the feasibility of the standard recovery algorithms. In contrast, this paper provides simple criteria that guarantee that a deterministic sensing matrix satisfying these criteria acts as a near isometry on an overwhelming majority of k-sparse signals; in particular, most such signals have a unique representation in the measurement domain. Probability still plays a critical role, but it enters the signal model rather than the construction of the sensing matrix. An essential element in our construction is that we require the columns of the sensing matrix to form a group under pointwise multiplication. The construction allows recovery methods for which the expected performance is sub-linear in C, and only quadratic in N, as compared to the super-linear complexity in C of the Basis Pursuit or Matching Pursuit algorithms; the focus on expected performance is more typical of mainstream signal processing than the worst case analysis that prevails in standard compressed sensing. Our framework encompasses many families of deterministic sensing matrices, including those formed from discrete chirps, Delsarte-Goethals codes, and extended BCH codes.
Counter braids: a novel counter architecture for per-flow measurement Fine-grained network measurement requires routers and switches to update large arrays of counters at very high link speed (e.g. 40 Gbps). A naive algorithm needs an infeasible amount of SRAM to store both the counters and a flow-to-counter association rule, so that arriving packets can update corresponding counters at link speed. This has made accurate per-flow measurement complex and expensive, and motivated approximate methods that detect and measure only the large flows. This paper revisits the problem of accurate per-flow measurement. We present a counter architecture, called Counter Braids, inspired by sparse random graph codes. In a nutshell, Counter Braids "compresses while counting". It solves the central problems (counter space and flow-to-counter association) of per-flow measurement by "braiding" a hierarchy of counters with random graphs. Braiding results in drastic space reduction by sharing counters among flows; and using random graphs generated on-the-fly with hash functions avoids the storage of flow-to-counter association. The Counter Braids architecture is optimal (albeit with a complex decoder) as it achieves the maximum compression rate asymptotically. For implementation, we present a low-complexity message passing decoding algorithm, which can recover flow sizes with essentially zero error. Evaluation on Internet traces demonstrates that almost all flow sizes are recovered exactly with only a few bits of counter space per flow.
Joint Source–Channel Communication for Distributed Estimation in Sensor Networks Power and bandwidth are scarce resources in dense wireless sensor networks and it is widely recognized that joint optimization of the operations of sensing, processing and communication can result in significant savings in the use of network resources. In this paper, a distributed joint source-channel communication architecture is proposed for energy-efficient estimation of sensor field data at a distant destination and the corresponding relationships between power, distortion, and latency are analyzed as a function of number of sensor nodes. The approach is applicable to a broad class of sensed signal fields and is based on distributed computation of appropriately chosen projections of sensor data at the destination - phase-coherent transmissions from the sensor nodes enable exploitation of the distributed beamforming gain for energy efficiency. Random projections are used when little or no prior knowledge is available about the signal field. Distinct features of the proposed scheme include: (1) processing and communication are combined into one distributed projection operation; (2) it virtually eliminates the need for in-network processing and communication; (3) given sufficient prior knowledge about the sensed data, consistent estimation is possible with increasing sensor density even with vanishing total network power; and (4) consistent signal estimation is possible with power and latency requirements growing at most sublinearly with the number of sensor nodes even when little or no prior knowledge about the sensed data is assumed at the sensor nodes.
Weighted Superimposed Codes and Constrained Integer Compressed Sensing We introduce a new family of codes, termed weighted superimposed codes (WSCs). This family generalizes the class of Euclidean superimposed codes (ESCs), used in multiuser identification systems. WSCs allow for discriminating all bounded, integer-valued linear combinations of real-valued codewords that satisfy prescribed norm and nonnegativity constraints. By design, WSCs are inherently noise tolerant. Therefore, these codes can be seen as special instances of robust compressed sensing schemes. The main results of the paper are lower and upper bounds on the largest achievable code rates of several classes of WSCs. These bounds suggest that, with the codeword and weighting vector constraints at hand, one can improve the code rates achievable by standard compressive sensing techniques.
Data compression and harmonic analysis In this paper we review some recent interactions between harmonic analysis and data compression. The story goes back of course to Shannon's R(D) theory in the case of Gaussian stationary processes, which says that transforming into a Fourier basis followed by block coding gives an optimal lossy compression technique; practical developments like transform-based image compression have been inspired by this result. In this paper we also discuss connections perhaps less familiar to the information theory community, growing out of the field of harmonic analysis. Recent harmonic analysis constructions, such as wavelet transforms and Gabor transforms, are essentially optimal transforms for transform coding in certain settings. Some of these transforms are under consideration for future compression standards. We discuss some of the lessons of harmonic analysis in this century. Typically, the problems and achievements of this field have involved goals that were not obviously related to practical data compression, and have used a language not immediately accessible to outsiders. Nevertheless, through an extensive generalization of what Shannon called the “sampling theorem”, harmonic analysis has succeeded in developing new forms of functional representation which turn out to have significant data compression interpretations. We explain why harmonic analysis has interacted with data compression, and we describe some interesting recent ideas in the field that may affect data compression in the future
The effective dimension and quasi-Monte Carlo integration Quasi-Monte Carlo (QMC) methods are successfully used for high-dimensional integrals arising in many applications. To understand this success, the notion of effective dimension has been introduced. In this paper, we analyse certain function classes commonly used in QMC methods for empirical and theoretical investigations and show that the problem of determining their effective dimension is analytically tractable. For arbitrary square integrable functions, we propose a numerical algorithm to compute their truncation dimension. We also consider some realistic problems from finance: the pricing of options. We study the special structure of the corresponding integrands by determining their effective dimension and show how large the effective dimension can be reduced and how much the accuracy of QMC estimates can be improved by using the Brownian bridge and the principal component analysis techniques. A critical discussion of the influence of these techniques on the QMC error is presented. The connection between the effective dimension and the performance of QMC methods is demonstrated by examples.
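A small sketch of the principal-component (PCA) construction of discrete Brownian paths mentioned above; concentrating the variance in the leading coordinates is what reduces the effective (truncation) dimension of the integrand. The discretization level is an arbitrary choice.

```python
# PCA construction of Brownian motion: eigendecomposition of the min(t_i, t_j) covariance.
import numpy as np

d = 32                                        # number of time steps
t = np.arange(1, d + 1) / d
C = np.minimum.outer(t, t)                    # covariance of Brownian motion on the grid

lam, V = np.linalg.eigh(C)                    # eigenvalues returned in ascending order
order = np.argsort(lam)[::-1]
lam, V = lam[order], V[:, order]

rng = np.random.default_rng(6)
z = rng.standard_normal(d)
path = V @ (np.sqrt(lam) * z)                 # one Brownian path, PCA construction

explained = np.cumsum(lam) / lam.sum()
print("B(1) sample:", path[-1])
print("variance captured by first 4 components:", explained[3])
```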
Modelling and simulation of autonomous oscillators with random parameters Abstract: We consider periodic problems of autonomous systems of ordinary differential equations or differential algebraic equations. To quantify uncertainties of physical parameters, we introduce random variables in the systems. Phase conditions are required to compute the resulting periodic random process. It follows that the variance of the process depends on the choice of the phase condition. We derive a necessary condition for a random process with a minimal total variance by the calculus of variations. A corresponding numerical method is constructed based on the generalised polynomial chaos. We present numerical simulations of two test examples.
Mono-multi bipartite Ramsey numbers, designs, and matrices Eroh and Oellermann defined BRR(G1, G2) as the smallest N such that any edge coloring of the complete bipartite graph KN,N contains either a monochromatic G1 or a multicolored G2. We restate the problem of determining BRR(K1,λ, Kr,s) in matrix form and prove estimates and exact values for several choices of the parameters. Our general bound uses Füredi's result on fractional matchings of uniform hypergraphs and we show that it is sharp if certain block designs exist. We obtain two sharp results for the case r = s = 2: we prove BRR(K1,λ, K2,2) = 3λ − 2 and that the smallest n for which any edge coloring of Kλ,n contains either a monochromatic K1,λ or a multicolored K2,2 is λ².
Multi-criteria analysis for a maintenance management problem in an engine factory: rational choice The industrial organization needs to develop better methods for evaluating the performance of its projects. We are interested in the problems related to pieces with differing degrees of dirt. In this direction, we propose and evaluate a maintenance decision problem in an engine factory that is specialized in the production, sale and maintenance of medium and slow speed four stroke engines. The main purpose of this paper is to study the problem by means of the analytic hierarchy process to obtain the weights of criteria, and the TOPSIS method as multicriteria decision making to obtain the ranking of alternatives, when the information is given in linguistic terms.
An image super-resolution scheme based on compressive sensing with PCA sparse representation Image super-resolution (SR) reconstruction has been an important research field due to its wide applications. Although many SR methods have been proposed, some problems remain to be solved, and the quality of the reconstructed high-resolution (HR) image needs to be improved. To address these problems, in this paper we propose an image super-resolution scheme based on compressive sensing theory with PCA sparse representation. We focus on the measurement matrix design of the CS process and the implementation of the sparse representation function for the PCA transformation. The measurement matrix design is based on the relation between the low-resolution (LR) image and the reconstructed high-resolution (HR) image, while the implementation of the PCA sparse representation function is based on the PCA transformation process. According to whether the covariance matrix of the HR image is known or not, two kinds of SR models are given. Finally, experiments comparing the proposed scheme with traditional interpolation methods and the CS scheme with DCT sparse representation are conducted. The experimental results on both smooth images and images with complex textures show that the proposed scheme is effective.
1.012298
0.013575
0.012678
0.011135
0.006208
0.002524
0.000912
0.0003
0.000042
0
0
0
0
0
Fixation Prediction for 360° Video Streaming in Head-Mounted Virtual Reality We study the problem of predicting the Field-of-Views (FoVs) of viewers watching 360° videos using commodity Head-Mounted Displays (HMDs). Existing solutions either use the viewer's current orientation to approximate the FoVs in the future, or extrapolate future FoVs using the historical orientations and dead-reckoning algorithms. In this paper, we develop fixation prediction networks that concurrently leverage sensor- and content-related features to predict the viewer fixation in the future, which is quite different from the solutions in the literature. The sensor-related features include HMD orientations, while the content-related features include image saliency maps and motion maps. We build a 360° video streaming testbed to HMDs, and recruit twenty-five viewers to watch ten 360° videos. We then train and validate two design alternatives of our proposed networks, which allows us to identify the better-performing design with the optimal parameter settings. Trace-driven simulation results show the merits of our proposed fixation prediction networks compared to the existing solutions, including: (i) lower consumed bandwidth, (ii) shorter initial buffering time, and (iii) short running time.
QoE-Based SVC Layer Dropping in LTE Networks Using Content-Aware Layer Priorities The increasing popularity of mobile video streaming applications has led to a high volume of video traffic in mobile networks. As the base station, for instance, the eNB in LTE networks, has limited physical resources, it can be overloaded by this traffic. This problem can be addressed by using Scalable Video Coding (SVC), which allows the eNB to drop layers of the video streams to dynamically adapt the bitrate. The impact of bitrate adaptation on the Quality of Experience (QoE) for the users depends on the content characteristics of videos. As the current mobile network architectures do not support the eNB in obtaining video content information, QoE optimization schemes with explicit signaling of content information have been proposed. These schemes, however, require the eNB or a specific optimization module to process the video content on the fly in order to extract the required information. This increases the computation and signaling overhead significantly, raising the OPEX for mobile operators. To address this issue, in this article, a content-aware (CA) priority marking and layer dropping scheme is proposed. The CA priority indicates a transmission order for the layers of all transmitted videos across all users, resulting from a comparison of their utility versus rate characteristics. The CA priority values can be determined at the P-GW on the fly, allowing mobile operators to control the priority marking process. Alternatively, they can be determined offline at the video servers, avoiding real-time computation in the core network. The eNB can perform content-aware SVC layer dropping using only the priority values. No additional content processing is required. The proposed scheme is lightweight both in terms of architecture and computation. The improvement in QoE is substantial and very close to the performance obtained with the computation and signaling-intensive QoE optimization schemes.
Advanced Transport Options for the Dynamic Adaptive Streaming over HTTP. Multimedia streaming over HTTP is no longer a niche research topic as it has entered our daily life. The common assumption is that it is deployed on top of the existing infrastructure utilizing application (HTTP) and transport (TCP) layer protocols as is. Interestingly, standards like MPEG's Dynamic Adaptive Streaming over HTTP (DASH) do not mandate the usage of any specific transport protocol, allowing for sufficient deployment flexibility which is further supported by emerging developments within both protocol layers. This paper investigates and evaluates the usage of advanced transport options for the dynamic adaptive streaming over HTTP. We utilize a common test setup to evaluate HTTP/2.0 and Google's Quick UDP Internet Connections (QUIC) protocol in the context of DASH-based services.
Prioritized Buffer Control in Two-tier 360 Video Streaming 360 degree video compression and streaming is one of the key components of Virtual Reality (VR) applications. In 360 video streaming, a user may freely navigate through the captured 3D environment by changing her desired viewing direction. Only a small portion of the entire 360 degree video is watched at any time. Streaming the entire 360 degree raw video is therefore unnecessary and bandwidth-consuming. On the other hand, only streaming the video in the predicted user's view direction will introduce streaming discontinuity whenever the prediction is wrong. In this work, a two-tier 360 video streaming framework with prioritized buffer control is proposed to effectively accommodate the dynamics in both network bandwidth and viewing direction. Through simulations driven by real network bandwidth and viewing direction traces, we demonstrate that the proposed framework can significantly outperform the conventional 360 video streaming solutions.
MP-DASH: Adaptive Video Streaming Over Preference-Aware Multipath. Compared with using only a single wireless path such as WiFi, leveraging multipath (e.g., WiFi and cellular) can dramatically improve users' quality of experience (QoE) for mobile video streaming. However, Multipath TCP (MPTCP), the de-facto multipath solution, lacks the support to prioritize one path over another. When applied to video streaming, it may cause undesired network usage such as substantial over-utilization of the metered cellular link. In this paper, we propose MP-DASH, a multipath framework for video streaming with the awareness of network interface preferences from users. The basic idea behind MP-DASH is to strategically schedule video chunks' delivery and thus satisfy user preferences. MP-DASH can work with a wide range of off-the-shelf video rate adaptation algorithms with very small changes. Our extensive field studies at 33 locations in three U.S. states suggest that MP-DASH is very effective: it can reduce cellular usage by up to 99% and radio energy consumption by up to 85% with negligible degradation of QoE, compared with off-the-shelf MPTCP.
Practical, Real-time Centralized Control for CDN-based Live Video Delivery Live video delivery is expected to reach a peak of 50 Tbps this year. This surging popularity is fundamentally changing the Internet video delivery landscape. CDNs must meet users' demands for fast join times, high bitrates, and low buffering ratios, while minimizing their own cost of delivery and responding to issues in real-time. Wide-area latency, loss, and failures, as well as varied workloads ("mega-events" to long-tail), make meeting these demands challenging. An analysis of video sessions concluded that a centralized controller could improve user experience, but CDN systems have shied away from such designs due to the difficulty of quickly handling failures, a requirement of both operators and users. We introduce VDN, a practical approach to a video delivery network that uses a centralized algorithm for live video optimization. VDN provides CDN operators with real-time, fine-grained control. It does this in spite of challenges resulting from the wide-area (e.g., state inconsistency, partitions, failures) by using a hybrid centralized+distributed control plane, increasing average bitrate by 1.7x and decreasing cost by 2x in different scenarios.
Developing a predictive model of quality of experience for internet video Improving users' quality of experience (QoE) is crucial for sustaining the advertisement and subscription based revenue models that enable the growth of Internet video. Despite the rich literature on video and QoE measurement, our understanding of Internet video QoE is limited because of the shift from traditional methods of measuring video quality (e.g., Peak Signal-to-Noise Ratio) and user experience (e.g., opinion scores). These have been replaced by new quality metrics (e.g., rate of buffering, bitrate) and new engagement centric measures of user experience (e.g., viewing time and number of visits). The goal of this paper is to develop a predictive model of Internet video QoE. To this end, we identify two key requirements for the QoE model: (1) it has to be tied in to observable user engagement and (2) it should be actionable to guide practical system design decisions. Achieving this goal is challenging because the quality metrics are interdependent, they have complex and counter-intuitive relationships to engagement measures, and there are many external factors that confound the relationship between quality and engagement (e.g., type of video, user connectivity). To address these challenges, we present a data-driven approach to model the metric interdependencies and their complex relationships to engagement, and propose a systematic framework to identify and account for the confounding factors. We show that a delivery infrastructure that uses our proposed model to choose CDN and bitrates can achieve more than 20% improvement in overall user engagement compared to strawman approaches.
Estimators and tail bounds for dimension reduction in lα (0 < α ≤ 2) using stable random projections. The method of stable random projections is popular in data stream computations, data mining, information retrieval, and machine learning, for efficiently computing the lα (0 < α ≤ 2) distances using a small (memory) space, in one pass of the data. We propose algorithms based on (1) the geometric mean estimator, for all 0 < α ≤ 2, and (2) the harmonic mean estimator, only for small α (e.g., α < 0.344). Compared with the previous classical work [27], our main contributions include: • The general sample complexity bound for α ≠ 1,2. For α = 1, [27] provided a nice argument based on the inverse of Cauchy density about the median, leading to a sample complexity bound, although they did not provide the constants and their proof restricted ε to be "small enough." For general α ≠ 1, 2, however, the task becomes much more difficult. [27] provided the "conceptual promise" that a sample complexity bound similar to that for α = 1 should exist for general α, if a "non-uniform algorithm based on t-quantile" could be implemented. Such a conceptual algorithm was only for supporting the arguments in [27], not a real implementation. We consider this one of the main problems left open in [27]. In this study, we propose a practical algorithm based on the geometric mean estimator and derive the sample complexity bound for all 0 < α ≤ 2. • The practical and optimal algorithm for α = 0+. The l0 norm is an important case. Stable random projections can provide an approximation to the l0 norm using α → 0+. We provide an algorithm based on the harmonic mean estimator, which is simple and statistically optimal. Its tail bounds are sharper than the bounds derived based on the geometric mean. We also discover a (possibly surprising) fact: in boolean data, stable random projections using α = 0+ with the harmonic mean estimator will be about twice as accurate as (l2) normal random projections. Because high-dimensional boolean data are common, we expect this fact will be practically quite useful. • The precise theoretical analysis and practical implications. We provide the precise constants in the tail bounds for both the geometric mean and harmonic mean estimators. We also provide the variances (either exact or asymptotic) for the proposed estimators. These results can assist practitioners to choose sample sizes accurately.
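For the α = 1 case, the classical median estimator referenced as [27] above can be sketched in a few lines; the dimensions below are illustrative, and the geometric-mean and harmonic-mean estimators proposed in the paper are not reproduced here.

```python
# Stable random projections for l1 (alpha = 1): Cauchy projections + median estimator.
import numpy as np

rng = np.random.default_rng(2)
D, m = 10_000, 200                       # original dimension, projection size

x = rng.standard_normal(D)
R = rng.standard_cauchy((m, D))          # entries are 1-stable (Cauchy)
y = R @ x                                # each y_i ~ Cauchy scaled by ||x||_1

l1_estimate = np.median(np.abs(y))       # median(|standard Cauchy|) = 1, so this targets ||x||_1
print("estimate:", l1_estimate, "true l1 norm:", np.linalg.norm(x, 1))
```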
Coreference resolution using competition learning approach In this paper we propose a competition learning approach to coreference resolution. Traditionally, supervised machine learning approaches adopt the single-candidate model. Nevertheless the preference relationship between the antecedent candidates cannot be determined accurately in this model. By contrast, our approach adopts a twin-candidate learning model. Such a model can present the competition criterion for antecedent candidates reliably, and ensure that the most preferred candidate is selected. Furthermore, our approach applies a candidate filter to reduce the computational cost and data noises during training and resolution. The experimental results on MUC-6 and MUC-7 data set show that our approach can outperform those based on the single-candidate model.
A fuzzy approach to select the location of the distribution center The location selection of distribution center (DC) is one of the most important decision issues for logistics managers. Owing to vague concept frequently represented in decision data, a new multiple criteria decision-making method is proposed to solve the distribution center location selection problem under fuzzy environment. In the proposed method, the ratings of each alternative and the weight of each criterion are described by linguistic variables which can be expressed in triangular fuzzy numbers. The final evaluation value of each DC location is also expressed in a triangular fuzzy number. By calculating the difference of final evaluation value between each pair of DC locations, a fuzzy preference relation matrix is constructed to represent the intensity of the preferences of one plant location over another. And then, a stepwise ranking procedure is proposed to determine the ranking order of all candidate locations. Finally, a numerical example is solved to illustrate the procedure of the proposed method at the end of this paper.
Combining compound linguistic ordinal scale and cognitive pairwise comparison in the rectified fuzzy TOPSIS method for group decision making Group decision making is the process to explore the best choice among the screened alternatives under predefined criteria with corresponding weights from assessment of a group of decision makers. The Fuzzy TOPSIS taking an evaluated fuzzy decision matrix as input is a popular tool to analyze the ideal alternative. This research, however, finds that the classical fuzzy TOPSIS produces a misleading result due to some inappropriate definitions, and proposes the rectified fuzzy TOPSIS addressing two technical problems. As the decision accuracy also depends on the evaluation quality of the fuzzy decision matrix comprising rating scores and weights, this research applies compound linguistic ordinal scale as the fuzzy rating scale for expert judgments, and cognitive pairwise comparison for determining the fuzzy weights. The numerical case of a robot selection problem demonstrates the hybrid approach leading to the much reliable result for decision making, comparing with the conventional fuzzy Analytic Hierarchy Process and TOPSIS.
Looking for a good fuzzy system interpretability index: An experimental approach Interpretability is acknowledged as the main advantage of fuzzy systems and it should be given a main role in fuzzy modeling. Classical systems are viewed as black boxes because mathematical formulas set the mapping between inputs and outputs. On the contrary, fuzzy systems (if they are built regarding some constraints) can be seen as gray boxes in the sense that every element of the whole system can be checked and understood by a human being. Interpretability is essential for those applications with high human interaction, for instance decision support systems in fields like medicine, economics, etc. Since interpretability is not guaranteed by definition, a huge effort has been done to find out the basic constraints to be superimposed during the fuzzy modeling process. People talk a lot about interpretability but the real meaning is not clear. Understanding of fuzzy systems is a subjective task which strongly depends on the background (experience, preferences, and knowledge) of the person who makes the assessment. As a consequence, although there have been a few attempts to define interpretability indices, there is still not a universal index widely accepted. As part of this work, with the aim of evaluating the most used indices, an experimental analysis (in the form of a web poll) was carried out yielding some useful clues to keep in mind regarding interpretability assessment. Results extracted from the poll show the inherent subjectivity of the measure because we collected a huge diversity of answers completely different at first glance. However, it was possible to find out some interesting user profiles after comparing carefully all the answers. It can be concluded that defining a numerical index is not enough to get a widely accepted index. Moreover, it is necessary to define a fuzzy index easily adaptable to the context of each problem as well as to the user quality criteria.
Variation-aware interconnect extraction using statistical moment preserving model order reduction In this paper we present a stochastic model order reduction technique for interconnect extraction in the presence of process variabilities, i.e. variation-aware extraction. It is becoming increasingly evident that sampling based methods for variation-aware extraction are more efficient than more computationally complex techniques such as stochastic Galerkin method or the Neumann expansion. However, one of the remaining computational challenges of sampling based methods is how to simultaneously and efficiently solve the large number of linear systems corresponding to each different sample point. In this paper, we present a stochastic model reduction technique that exploits the similarity among the different solves to reduce the computational complexity of subsequent solves. We first suggest how to build a projection matrix such that the statistical moments and/or the coefficients of the projection of the stochastic vector on some orthogonal polynomials are preserved. We further introduce a proximity measure, which we use to determine apriori if a given system needs to be solved, or if it is instead properly represented using the currently available basis. Finally, in order to reduce the time required for the system assembly, we use the multivariate Hermite expansion to represent the system matrix. We verify our method by solving a variety of variation-aware capacitance extraction problems ranging from on-chip capacitance extraction in the presence of width and thickness variations, to off-chip capacitance extraction in the presence of surface roughness. We further solve very large scale problems that cannot be handled by any other state of the art technique.
Some general comments on fuzzy sets of type-2 This paper contains some general comments on the algebra of truth values of fuzzy sets of type 2. It details the precise mathematical relationship with the algebras of truth values of ordinary fuzzy sets and of interval-valued fuzzy sets. Subalgebras of the algebra of truth values and t-norms on them are discussed. There is some discussion of finite type-2 fuzzy sets.
1.0432
0.048
0.048
0.048
0.024
0.013714
0.001371
0
0
0
0
0
0
0
A tensor-based volterra series black-box nonlinear system identification and simulation framework. Tensors are a multi-linear generalization of matrices to their d-way counterparts, and are receiving intense interest recently due to their natural representation of high-dimensional data and the availability of fast tensor decomposition algorithms. Given the input-output data of a nonlinear system/circuit, this paper presents a nonlinear model identification and simulation framework built on top of Volterra series and its seamless integration with tensor arithmetic. By exploiting partially-symmetric polyadic decompositions of sparse Toeplitz tensors, the proposed framework permits a pleasantly scalable way to incorporate high-order Volterra kernels. Such an approach largely eludes the curse of dimensionality and allows computationally fast modeling and simulation beyond weakly nonlinear systems. The black-box nature of the model also hides structural information of the system/circuit and encapsulates it in terms of compact tensors. Numerical examples are given to verify the efficacy, efficiency and generality of this tensor-based modeling and simulation framework.
Phase Noise and Noise Induced Frequency Shift in Stochastic Nonlinear Oscillators Phase noise plays an important role in the performances of electronic oscillators. Traditional approaches describe the phase noise problem as a purely diffusive process. In this paper we develop a novel phase model reduction technique for phase noise analysis of nonlinear oscillators subject to stochastic inputs. We obtain analytical equations for both the phase deviation and the probability density function of the phase deviation. We show that, in general, the phase reduced models include non Markovian terms. Under the Markovian assumption, we demonstrate that the effect of white noise is to generate both phase diffusion and a frequency shift, i.e. phase noise is best described as a convection-diffusion process. The analysis of a solvable model shows the accuracy of our theory, and that it gives better predictions than traditional phase models.
Macromodel Generation for BioMEMS Components Using a Stabilized Balanced Truncation Plus Trajectory Piecewise-Linear Approach In this paper, we present a technique for automatically extracting nonlinear macromodels of biomedical microelectromechanical systems devices from physical simulation. The technique is a modification of the recently developed trajectory piecewise-linear approach, but uses ideas from balanced truncation to produce much lower order and more accurate models. The key result is a perturbation analysis of an instability problem with the reduction algorithm, and a simple modification that makes the algorithm more robust. Results are presented from examples to demonstrate dramatic improvements in reduced model accuracy and show the limitations of the method.
Identification of PARAFAC-Volterra cubic models using an Alternating Recursive Least Squares algorithm A broad class of nonlinear systems can be modelled by the Volterra series representation. However, its practical use in nonlinear system identification is sometimes limited due to the large number of parameters associated with the Volterra filters structure. This paper is concerned with the problem of identification of third-order Volterra kernels. A tensorial decomposition called PARAFAC is used to represent such a kernel. A new algorithm called the Alternating Recursive Least Squares (ARLS) algorithm is applied to identify this decomposition for estimating the Volterra kernels of cubic systems. This method significantly reduces the computational complexity of Volterra kernel estimation. Simulation results show the ability of the proposed method to achieve a good identification and an important complexity reduction, i.e. representation of Volterra cubic kernels with few parameters.
Compact model order reduction of weakly nonlinear systems by associated transform Abstract: We advance a recently proposed approach, called the associated transform, for computing slim projection matrices serving high-order Volterra transfer functions in the context of weakly nonlinear model order reduction (NMOR). The innovation is to carry out an association of multivariate Laplace variables in high-order multiple-input multiple-output transfer functions to generate univariate single-s transfer functions. In contrast to conventional projection-based NMOR, which finds projection subspaces about every s_i in multivariate transfer functions, only that about a single s is required in the proposed approach. This leads to much more compact reduced-order models without compromising accuracy. Specifically, the proposed NMOR procedure first converts the original set of Volterra transfer functions into a new set of linear transfer functions, which then allows direct utilization of linear MOR techniques for modeling weakly nonlinear systems with either single-tone or multi-tone inputs. An adaptive algorithm is also given to govern the selection of appropriate basis orders in different Volterra transfer functions. Numerical examples then verify the effectiveness of the proposed scheme.
Model Reduction and Simulation of Nonlinear Circuits via Tensor Decomposition Model order reduction of nonlinear circuits (especially highly nonlinear circuits) has always been a theoretically and numerically challenging task. In this paper we utilize tensors (namely, a higher order generalization of matrices) to develop a tensor-based nonlinear model order reduction (TNMOR) algorithm for the efficient simulation of nonlinear circuits. Unlike existing nonlinear model order reduction methods, in TNMOR high-order nonlinearities are captured using tensors, followed by decomposition and reduction to a compact tensor-based reduced-order model. Therefore, TNMOR completely avoids the dense reduced-order system matrices, which in turn allows faster simulation and a smaller memory requirement if relatively low-rank approximations of these tensors exist. Numerical experiments on transient and periodic steady-state analyses confirm the superior accuracy and efficiency of TNMOR, particularly in highly nonlinear scenarios.
Virtual Probe: A Statistical Framework for Low-Cost Silicon Characterization of Nanoscale Integrated Circuits In this paper, we propose a new technique, referred to as virtual probe (VP), to efficiently measure, characterize, and monitor spatially-correlated inter-die and/or intra-die variations in nanoscale manufacturing process. VP exploits recent breakthroughs in compressed sensing to accurately predict spatial variations from an exceptionally small set of measurement data, thereby reducing the cost of silicon characterization. By exploring the underlying sparse pattern in spatial frequency domain, VP achieves substantially lower sampling frequency than the well-known Nyquist rate. In addition, VP is formulated as a linear programming problem and, therefore, can be solved both robustly and efficiently. Our industrial measurement data demonstrate the superior accuracy of VP over several traditional methods, including 2-D interpolation, Kriging prediction, and k-LSE estimation.
Calculation of Generalized Polynomial-Chaos Basis Functions and Gauss Quadrature Rules in Hierarchical Uncertainty Quantification Stochastic spectral methods are efficient techniques for uncertainty quantification. Recently they have shown excellent performance in the statistical analysis of integrated circuits. In stochastic spectral methods, one needs to determine a set of orthonormal polynomials and a proper numerical quadrature rule. The former are used as the basis functions in a generalized polynomial chaos expansion. The latter is used to compute the integrals involved in stochastic spectral methods. Obtaining such information requires knowing the density function of the random input a-priori. However, individual system components are often described by surrogate models rather than density functions. In order to apply stochastic spectral methods in hierarchical uncertainty quantification, we first propose to construct physically consistent closed-form density functions by two monotone interpolation schemes. Then, by exploiting the special forms of the obtained density functions, we determine the generalized polynomial-chaos basis functions and the Gauss quadrature rules that are required by a stochastic spectral simulator. The effectiveness of our proposed algorithm is verified by both synthetic and practical circuit examples.
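The sketch below illustrates the second half of this pipeline in a generic setting: once a (surrogate-derived) density is known only through its moments, the recurrence coefficients of the orthonormal (gPC) basis and the corresponding Gauss quadrature rule follow from the Hankel moment matrix via a Golub-Welsch-style construction. The moments used are an illustrative assumption, not the paper's interpolation-based algorithm.

```python
import numpy as np

def gauss_rule_from_moments(moments, n):
    """n-point Gauss rule for a density known only via its moments m_k = E[x^k],
    k = 0..2n. The recurrence coefficients alpha, beta also define the orthonormal
    polynomial (gPC) basis for this density."""
    M = np.array([[moments[i + j] for j in range(n + 1)] for i in range(n + 1)])
    R = np.linalg.cholesky(M).T              # Hankel moment matrix: M = R^T R, R upper
    alpha = np.empty(n)
    beta = np.empty(n - 1)
    for j in range(1, n + 1):
        alpha[j - 1] = R[j - 1, j] / R[j - 1, j - 1]
        if j >= 2:
            alpha[j - 1] -= R[j - 2, j - 1] / R[j - 2, j - 2]
        if j <= n - 1:
            beta[j - 1] = R[j, j] / R[j - 1, j - 1]
    J = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)   # Jacobi matrix
    nodes, V = np.linalg.eigh(J)
    weights = moments[0] * V[0, :] ** 2      # Golub-Welsch weights
    return nodes, weights

# Sanity check: the uniform weight on [-1, 1] reproduces Gauss-Legendre:
# nodes ~ -sqrt(3/5), 0, +sqrt(3/5); weights ~ 5/9, 8/9, 5/9.
m = [2.0 / (k + 1) if k % 2 == 0 else 0.0 for k in range(7)]
print(gauss_rule_from_moments(m, 3))
```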
Practical, fast Monte Carlo statistical static timing analysis: why and how Statistical static timing analysis (SSTA) has emerged as an essential tool for nanoscale designs. Monte Carlo methods are universally employed to validate the accuracy of the approximations made in all SSTA tools, but Monte Carlo itself is never employed as a strategy for practical SSTA. It is widely believed to be "too slow" -- despite an uncomfortable lack of rigorous studies to support this belief. We offer the first large-scale study to refute this belief. We synthesize recent results from fast quasi-Monte Carlo (QMC) deterministic sampling and efficient Karhunen-Loéve expansion (KLE) models of spatial correlation to show that Monte Carlo SSTA need not be slow. Indeed, we show for the ISCAS89 circuits, a few hundred, well-chosen sample points can achieve errors within 5%, with no assumptions on gate models, wire models, or the core STA engine, with runtimes less than 90 s.
A Stochastic Collocation Method for Elliptic Partial Differential Equations with Random Input Data In this paper we propose and analyze a Stochastic Collocation method to solve elliptic Partial Differential Equations with random coefficients and forcing terms (input data of the model). The input data are assumed to depend on a finite number of random variables. The method consists in a Galerkin approximation in space and a collocation in the zeros of suitable tensor product orthogonal polynomials (Gauss points) in the probability space and naturally leads to the solution of uncoupled deterministic problems as in the Monte Carlo approach. It can be seen as a generalization of the Stochastic Galerkin method proposed in [Babuška-Tempone-Zouraris, SIAM J. Num. Anal. 42 (2004)] and allows one to treat easily a wider range of situations, such as: input data that depend non-linearly on the random variables, diffusivity coefficients with unbounded second moments, random variables that are correlated or have unbounded support. We provide a rigorous convergence analysis and demonstrate exponential convergence of the "probability error" with respect to the number of Gauss points in each direction in the probability space, under some regularity assumptions on the random input data. Numerical examples show the effectiveness of the method. Key words: Collocation method, stochastic PDEs, finite elements, uncertainty quantification, exponential convergence. AMS subject classification: 65N35, 65N15, 65C20
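To make the "uncoupled deterministic solves at Gauss points" idea concrete, here is a small sketch with a single Gaussian random input: a black-box deterministic solver is run once per Gauss-Hermite collocation point and the statistics are assembled from the quadrature weights. The toy decay model and the normal parameter are illustrative assumptions, not the elliptic PDE setting of the paper.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

def deterministic_solve(k, t=1.0, steps=1000):
    """Plain forward-Euler solve of u' = -k*u, u(0) = 1: the 'black-box' solver."""
    u, dt = 1.0, t / steps
    for _ in range(steps):
        u += -k * u * dt
    return u

# Random input: k = k0 + sigma*xi with xi ~ N(0, 1).
k0, sigma, n_coll = 1.0, 0.3, 8
xi, w = hermegauss(n_coll)              # probabilists' Hermite: weight exp(-xi^2/2)
w = w / np.sqrt(2.0 * np.pi)            # normalise so the weights sum to 1

u_vals = np.array([deterministic_solve(k0 + sigma * x) for x in xi])
mean = np.dot(w, u_vals)
var = np.dot(w, (u_vals - mean) ** 2)

# Exact mean of exp(-(k0 + sigma*xi)) for comparison (lognormal moment).
exact_mean = np.exp(-k0 + sigma ** 2 / 2)
print(mean, exact_mean, var)
```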
Efficient approximation of random fields for numerical applications This article is dedicated to the rapid computation of separable expansions for the approximation of random fields. We consider approaches based on techniques from the approximation of non-local operators on the one hand and based on the pivoted Cholesky decomposition on the other hand. Especially, we provide an a posteriori error estimate for the pivoted Cholesky decomposition in terms of the trace. Numerical examples are provided to validate and quantify the presented methods. Copyright (c) 2015 John Wiley & Sons, Ltd.
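A compact sketch of the pivoted Cholesky idea with a trace-based a posteriori stopping criterion is shown below; the exponential covariance kernel, grid, and tolerance are illustrative assumptions.

```python
import numpy as np

def pivoted_cholesky(get_diag, get_col, n, tol=1e-8, max_rank=None):
    """Low-rank factor L (n x m) with C ~= L @ L.T for a symmetric PSD matrix C
    accessed only through its diagonal and columns. Stops when the a posteriori
    bound trace(C - L L^T) drops below tol."""
    d = get_diag().astype(float).copy()
    L = np.zeros((n, 0))
    error = d.sum()                      # trace of the current residual
    while error > tol and L.shape[1] < (max_rank or n):
        i = int(np.argmax(d))            # pivot: largest remaining diagonal entry
        col = (get_col(i) - L @ L[i, :]) / np.sqrt(d[i])
        L = np.column_stack([L, col])
        d -= col ** 2
        d[d < 0] = 0.0                   # guard against round-off
        error = d.sum()
    return L

# Example: exponential covariance of a random field on a 1-D grid.
x = np.linspace(0.0, 1.0, 200)
C = np.exp(-np.abs(x[:, None] - x[None, :]) / 0.5)
L = pivoted_cholesky(lambda: np.diag(C).copy(), lambda i: C[:, i], n=len(x), tol=1e-6)
print(L.shape, np.trace(C - L @ L.T))    # few columns suffice; trace error below tol
```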
A model of fuzzy linguistic IRS based on multi-granular linguistic information An important question in IRSs is how to facilitate the IRS-user interaction, even more so when the complexity of the fuzzy query language makes it difficult to formulate user queries. The use of linguistic variables to represent the input and output information in the retrieval process of IRSs significantly improves the IRS-user interaction. In the activity of an IRS, there are aspects of different nature to be assessed, e.g., the relevance of documents, the importance of query terms, etc. Therefore, these aspects should be assessed with different uncertainty degrees, i.e., using several label sets with different granularity of uncertainty.
Virus propagation with randomness. Viruses are organisms that need to infect a host cell in order to reproduce. The new viruses leave the infected cell and look for other susceptible cells to infect. The mathematical models for virus propagation are very similar to population and epidemic models, and involve a relatively large number of parameters. These parameters are very difficult to establish with accuracy, while variability in the cell and virus populations and measurement errors are also to be expected. To deal with this issue, we consider the parameters to be random variables with given distributions. We use a non-intrusive variant of the polynomial chaos method to obtain statistics from the differential equations of two different virus models. The equations to be solved remain the same as in the deterministic case; thus no new computer codes need to be developed. Some examples are presented.
Efficient Decision-Making Scheme Based on LIOWAD. A new decision-making method called the linguistic induced ordered weighted averaging distance (LIOWAD) operator is presented, built by using induced aggregation operators and linguistic information in the Hamming distance. This aggregation operator provides a parameterized family of linguistic aggregation operators that includes the maximum distance, the minimum distance, the linguistic normalized Hamming distance, the linguistic weighted Hamming distance and the linguistic ordered weighted averaging distance, among others. Special attention is given to the analysis of different particular types of LIOWAD operators. The paper ends with an application of the new approach to a decision-making problem about the selection of investments in a linguistic environment.
Scores: 1.20816, 0.20816, 0.20816, 0.20816, 0.10416, 0.069467, 0.029899, 0.001977, 0.000277, 0.000016, 0, 0, 0, 0
Guaranteed passive balancing transformations for model order reduction The major concerns in state-of-the-art model reduction algorithms are: achieving accurate models of sufficiently small size, numerically stable and efficient generation of the models, and preservation of system properties such as passivity. Algorithms, such as PRIMA, generate guaranteed-passive models for systems with special internal structure, using numerically stable and efficient Krylov-subspace iterations. Truncated balanced realization (TBR) algorithms, as used to date in the design automation community, can achieve smaller models with better error control, but do not necessarily preserve passivity. In this paper, we show how to construct TBR-like methods that generate guaranteed passive reduced models and in addition are applicable to state-space systems with arbitrary internal structure.
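For reference, the sketch below is a bare-bones square-root balanced truncation (the baseline, non-passivity-preserving TBR that the paper builds on); the toy state-space matrices and reduced order are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, svd

def psd_sqrt(W):
    """Square-root factor of a (numerically) PSD matrix: W ~= L @ L.T."""
    w, V = np.linalg.eigh(W)
    return V @ np.diag(np.sqrt(np.clip(w, 0.0, None)))

def balanced_truncation(A, B, C, r):
    """Plain square-root TBR of a stable system dx/dt = Ax + Bu, y = Cx.
    (Baseline only; the guaranteed-passive variants modify the Gramians used here.)"""
    Wc = solve_continuous_lyapunov(A, -B @ B.T)      # A Wc + Wc A' + B B' = 0
    Wo = solve_continuous_lyapunov(A.T, -C.T @ C)    # A' Wo + Wo A + C' C = 0
    Lc, Lo = psd_sqrt(Wc), psd_sqrt(Wo)
    U, s, Vt = svd(Lo.T @ Lc)                        # s = Hankel singular values
    Si = np.diag(1.0 / np.sqrt(s[:r]))
    T = Lc @ Vt[:r, :].T @ Si                        # right projection matrix
    Ti = Si @ U[:, :r].T @ Lo.T                      # left projection, Ti @ T = I_r
    return Ti @ A @ T, Ti @ B, C @ T, s

# Toy stable system (random A shifted to be Hurwitz).
rng = np.random.default_rng(0)
n = 12
A = rng.standard_normal((n, n))
A -= (np.max(np.linalg.eigvals(A).real) + 1.0) * np.eye(n)
B = rng.standard_normal((n, 2))
C = rng.standard_normal((2, n))
Ar, Br, Cr, hsv = balanced_truncation(A, B, C, r=4)
print(Ar.shape, hsv)      # H-infinity error is bounded by 2 * sum of discarded HSVs
```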
Reduced-order modelling of linear time-varying systems We present a theory for reduced order modelling of linear time varying systems, together with efficient numerical methods for application to large systems. The technique, called TVP (Time-Varying Pade), is applicable to deterministic as well as noise analysis of many types of communication subsystems, such as mixers and switched capacitor filters, for which existing model reduction techniques cannot be used. TVP is therefore suitable for hierarchical verification of entire communication systems. We present practical applications in which TVP generates macromodels which are more than two orders of magnitude smaller but still replicate the input-output behaviour of the original systems accurately. The size reduction results in a speedup of more than 500.
Random Sampling of Moment Graph: A Stochastic Krylov-Reduction Algorithm In this paper we introduce a new algorithm for model order reduction in the presence of parameter or process variation. Our analysis is performed using a graph interpretation of the multi-parameter moment matching approach, leading to a computational technique based on random sampling of moment graph (RSMG). Using this technique, we have developed a new algorithm that combines the best aspects of recently proposed parameterized moment-matching and approximate TBR procedures. RSMG attempts to avoid both exponential growth of computational complexity and multiple matrix factorizations, the primary drawbacks of existing methods, and illustrates good ability to tailor algorithms to apply computational effort where needed. Industry examples are used to verify our new algorithms
SPARE: a scalable algorithm for passive, structure preserving, parameter-aware model order reduction This paper describes a flexible and efficient new algorithm for model order reduction of parameterized systems. The method is based on the reformulation of the parameterized system as a perturbation-like parallel interconnection of the nominal transfer function and the nonparameterized transfer function sensitivities with respect to the parameter variations. Such a formulation reveals an explicit dependence on each parameter which is exploited by reducing each component system independently via a standard nonparameterized structure preserving algorithm. Therefore, the resulting smaller size interconnected system retains the structure of the original system with respect to parameter dependence. This allows for better accuracy control, enabling independent adaptive order determination with respect to each parameter and adding flexibility in simulation environments. It is shown that the method is efficiently scalable and preserves relevant system properties such as passivity. The new technique can handle fairly large parameter variations on systems whose outputs exhibit smooth dependence on the parameters, also allowing design space exploration to some degree. Several examples show that besides the added flexibility and control, when compared with competing algorithms, the proposed technique can, in some cases, produce smaller reduced models with potential accuracy gains.
Phase Noise and Noise Induced Frequency Shift in Stochastic Nonlinear Oscillators Phase noise plays an important role in the performances of electronic oscillators. Traditional approaches describe the phase noise problem as a purely diffusive process. In this paper we develop a novel phase model reduction technique for phase noise analysis of nonlinear oscillators subject to stochastic inputs. We obtain analytical equations for both the phase deviation and the probability density function of the phase deviation. We show that, in general, the phase reduced models include non Markovian terms. Under the Markovian assumption, we demonstrate that the effect of white noise is to generate both phase diffusion and a frequency shift, i.e. phase noise is best described as a convection-diffusion process. The analysis of a solvable model shows the accuracy of our theory, and that it gives better predictions than traditional phase models.
Algorithmic Macromodelling Methods for Mixed-Signal Systems Electronic systems today, especially those for communications and sensing, are typically composed of a complex mix of digital and mixed-signal circuit blocks. Verifying such systems prior to fabrication is challenging due to their size and complexity. Automated model generation is becoming an increasingly important component of methodologies for effective system verification. In this paper, we review algorithmically-based model generation methods for linear and nonlinear systems. We comment on the development of such macromodelling methods over the last decade, clarify their domains of application and evaluate their strengths and current limitations.
A reliable and efficient procedure for oscillator PPV computation, with phase noise macromodeling applications The main effort in oscillator phase noise calculation and macromodeling lies in computing a vector function called the perturbation projection vector (PPV). Current techniques for PPV calculation use time-domain numerics to generate the system's monodromy matrix, followed by full or partial eigenanalysis. We present superior methods that find the PPV using only a single linear solution of the oscillator's time- or frequency-domain steady-state Jacobian matrix. The new methods are better suited for implementation in existing tools with harmonic balance or shooting capabilities (especially those incorporating "fast" variants), and can also be more accurate than explicit eigenanalysis. A key advantage is that they dispense with the need to select the correct one eigenfunction from amongst a potentially large set of choices, an issue that explicit eigencalculation-based methods have to face. We illustrate the new methods in detail using LC and ring oscillators.
Parameterized model order reduction of nonlinear dynamical systems In this paper we present a parameterized reduction technique for non-linear systems. Our approach combines an existing non-parameterized trajectory piecewise linear method for non-linear systems, with an existing moment matching parameterized technique for linear systems. Results and comparisons are presented for two examples: an analog non-linear circuit, and a MEM switch.
Impact of interconnect variations on the clock skew of a gigahertz microprocessor Due to the large die sizes and tight relative clock skew margins, the impact of interconnect manufacturing variations on the clock skew in today's gigahertz microprocessors can no longer be ignored. Unlike manufacturing variations in the devices, the impact of the interconnect manufacturing variations on IC timing performance cannot be captured by worst/best case corner point methods. Thus it is difficult to estimate the clock skew variability due to interconnect variations. In this paper we analyze the timing impact of several key statistically independent interconnect variations in a context-dependent manner by applying a previously reported interconnect variational order-reduction technique. The results show that the interconnect variations can cause up to 25% clock skew variability in a modern microprocessor design.
Practical, fast Monte Carlo statistical static timing analysis: why and how Statistical static timing analysis (SSTA) has emerged as an essential tool for nanoscale designs. Monte Carlo methods are universally employed to validate the accuracy of the approximations made in all SSTA tools, but Monte Carlo itself is never employed as a strategy for practical SSTA. It is widely believed to be "too slow" -- despite an uncomfortable lack of rigorous studies to support this belief. We offer the first large-scale study to refute this belief. We synthesize recent results from fast quasi-Monte Carlo (QMC) deterministic sampling and efficient Karhunen-Loéve expansion (KLE) models of spatial correlation to show that Monte Carlo SSTA need not be slow. Indeed, we show for the ISCAS89 circuits, a few hundred, well-chosen sample points can achieve errors within 5%, with no assumptions on gate models, wire models, or the core STA engine, with runtimes less than 90 s.
Test Metrics Model for Analog Test Development The trend nowadays is to integrate more and more functionalities into a single chip. This, however, has serious implications in the testing cost. Especially for the analog circuits, the testing cost tends to be very high, despite the fact they occupy a small fraction of the area of the chip. Therefore, to reduce this cost, there is a high interest to replace the most demanding tests by alternative measurements. However, such replacement may inadvertently result in accepting faulty chips or rejecting functional chips. In this paper, we present a method for estimating such test metrics in the general scenario where a single test is replaced by a single measurement. The method is based on the extreme value theory and the statistical blockade algorithm. It can be readily applied during the test development phase to obtain estimates of the test metrics and corresponding confidence intervals with parts-per-million precision. For this purpose, the method requires a small number of selective simulations that we can afford to run in practice.
Extension principles for interval-valued intuitionistic fuzzy sets and algebraic operations The Atanassov's intuitionistic fuzzy (IF) set theory has become a popular topic of investigation in the fuzzy set community. However, there is less investigation on the representation of level sets and extension principles for interval-valued intuitionistic fuzzy (IVIF) sets as well as algebraic operations. In this paper, firstly the representation theorem of IVIF sets is proposed by using the concept of level sets. Then, the extension principles of IVIF sets are developed based on the representation theorem. Finally, the addition, subtraction, multiplication and division operations over IVIF sets are defined based on the extension principle. The representation theorem and extension principles as well as algebraic operations form an important part of Atanassov's IF set theory.
Qualitative spatial reasoning: a semi-quantitative approach using fuzzy logic Qualitative reasoning is useful as it facilitates reasoning with incomplete and weak information and aids the subsequent application of more detailed quantitative theories. Adoption of qualitative techniques for spatial reasoning can be very useful in situations where it is difficult to obtain precise information and where there are real constraints of memory, time and hostile threats. This paper formulates a computational model for obtaining all induced spatial constraints on a set of landmarks, given a set of approximate quantitative and qualitative constraints on them, which may be incomplete, and perhaps even conflicting.
A game-theoretic multipath routing for video-streaming services over Mobile Ad Hoc Networks The number of portable devices capable of maintaining wireless communications has increased considerably in the last decade. Such mobile nodes may form a spontaneous self-configured network connected by wireless links to constitute a Mobile Ad Hoc Network (MANET). As the number of mobile end users grows the demand of multimedia services, such as video-streaming, in such networks is envisioned to increase as well. One of the most appropriate video coding technique for MANETs is layered MPEG-2 VBR, which used with a proper multipath routing scheme improves the distribution of video streams. In this article we introduce a proposal called g-MMDSR (game theoretic-Multipath Multimedia Dynamic Source Routing), a cross-layer multipath routing protocol which includes a game theoretic approach to achieve a dynamic selection of the forwarding paths. The proposal seeks to improve the own benefits of the users whilst using the common scarce resources efficiently. It takes into account the importance of the video frames in the decoding process, which outperforms the quality of the received video. Our scheme has proved to enhance the performance of the framework and the experience of the end users. Simulations have been carried out to show the benefits of our proposal under different situations where high interfering traffic and mobility of the nodes are present.
Scores: 1.013966, 0.010041, 0.008728, 0.008706, 0.007451, 0.007143, 0.003801, 0.001994, 0.000355, 0.000054, 0, 0, 0, 0
Finding Structure with Randomness: Probabilistic Algorithms for Constructing Approximate Matrix Decompositions Low-rank matrix approximations, such as the truncated singular value decomposition and the rank-revealing QR decomposition, play a central role in data analysis and scientific computing. This work surveys and extends recent research which demonstrates that randomization offers a powerful tool for performing low-rank matrix approximation. These techniques exploit modern computational architectures more fully than classical methods and open the possibility of dealing with truly massive data sets. This paper presents a modular framework for constructing randomized algorithms that compute partial matrix decompositions. These methods use random sampling to identify a subspace that captures most of the action of a matrix. The input matrix is then compressed—either explicitly or implicitly—to this subspace, and the reduced matrix is manipulated deterministically to obtain the desired low-rank factorization. In many cases, this approach beats its classical competitors in terms of accuracy, robustness, and/or speed. These claims are supported by extensive numerical experiments and a detailed error analysis. The specific benefits of randomized techniques depend on the computational environment. Consider the model problem of finding the $k$ dominant components of the singular value decomposition of an $m \times n$ matrix. (i) For a dense input matrix, randomized algorithms require $\bigO(mn \log(k))$ floating-point operations (flops) in contrast to $ \bigO(mnk)$ for classical algorithms. (ii) For a sparse input matrix, the flop count matches classical Krylov subspace methods, but the randomized approach is more robust and can easily be reorganized to exploit multiprocessor architectures. (iii) For a matrix that is too large to fit in fast memory, the randomized techniques require only a constant number of passes over the data, as opposed to $\bigO(k)$ passes for classical algorithms. In fact, it is sometimes possible to perform matrix approximation with a single pass over the data.
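The core "randomized range finder plus deterministic post-processing" recipe surveyed here fits in a few lines; the sketch below is the basic version without power iterations, and the matrix size, target rank, and oversampling are illustrative choices.

```python
import numpy as np

def randomized_svd(A, k, oversample=10, seed=0):
    """Basic randomized SVD: sample the range with a Gaussian test matrix,
    orthonormalise it, then run a small deterministic SVD on the compressed matrix."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    Omega = rng.standard_normal((n, k + oversample))   # random test matrix
    Q, _ = np.linalg.qr(A @ Omega)                     # orthonormal range basis
    B = Q.T @ A                                        # small (k + p) x n matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :k], s[:k], Vt[:k, :]

# Toy check on a numerically low-rank matrix.
rng = np.random.default_rng(1)
A = rng.standard_normal((500, 40)) @ rng.standard_normal((40, 300))
U, s, Vt = randomized_svd(A, k=40)
print(np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A))
```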
A low-rank approach to the computation of path integrals We present a method for solving the reaction–diffusion equation with general potential in free space. It is based on the approximation of the Feynman–Kac formula by a sequence of convolutions on sequentially diminishing grids. For computation of the convolutions we propose a fast algorithm based on the low-rank approximation of the Hankel matrices. The algorithm has complexity of $O(nrM\log M + nr^2 M)$ flops and requires $O(Mr)$ floating-point numbers in memory, where $n$ is the dimension of the integral, $r \ll n$, and $M$ is the mesh size in one dimension. The presented technique can be generalized to the higher-order diffusion processes.
Efficient Computation of Highly Oscillatory Integrals by Using QTT Tensor Approximation. We propose a new method for the efficient approximation of a class of highly oscillatory weighted integrals where the oscillatory function depends on the frequency parameter ω > 0, typically varying in a large interval. Our approach is based, for a fixed but arbitrary oscillator, on the pre-computation and low parametric approximation of certain ω-dependent prototype functions whose evaluation leads in a straightforward way to recover the target integral. The difficulty that arises is that these prototype functions consist of oscillatory integrals which makes them difficult to evaluate. Furthermore, they have to be approximated typically in large intervals. Here we use the quantized-tensor train (QTT) approximation method for functional M-vectors of logarithmic complexity in M in combination with a cross-approximation scheme for TT tensors. This allows the accurate approximation and efficient storage of these functions in the wide range of grid and frequency parameters. Numerical examples illustrate the efficiency of the QTT-based numerical integration scheme on various examples in one and several spatial dimensions.
Informative Sensing Compressed sensing is a recent set of mathematical results showing that sparse signals can be exactly reconstructed from a small number of linear measurements. Interestingly, for ideal sparse signals with no measurement noise, random measurements allow perfect reconstruction while measurements based on principal component analysis (PCA) or independent component analysis (ICA) do not. At the same time, for other signal and noise distributions, PCA and ICA can significantly outperform random projections in terms of enabling reconstruction from a small number of measurements. In this paper we ask: given the distribution of signals we wish to measure, what are the optimal set of linear projections for compressed sensing? We consider the problem of finding a small number of linear projections that are maximally informative about the signal. Formally, we use the InfoMax criterion and seek to maximize the mutual information between the signal, x, and the (possibly noisy) projection y = Wx. We show that in general the optimal projections are not the principal components of the data nor random projections, but rather a seemingly novel set of projections that capture what is still uncertain about the signal, given the knowledge of distribution. We present analytic solutions for certain special cases including natural images. In particular, for natural images, the near-optimal projections are bandwise random, i.e., incoherent to the sparse bases at a particular frequency band but with more weights on the low-frequencies, which has a physical relation to the multi-resolution representation of images.
Multi-fidelity Gaussian process regression for prediction of random fields. We propose a new multi-fidelity Gaussian process regression (GPR) approach for prediction of random fields based on observations of surrogate models or hierarchies of surrogate models. Our method builds upon recent work on recursive Bayesian techniques, in particular recursive co-kriging, and extends it to vector-valued fields and various types of covariances, including separable and non-separable ones. The framework we propose is general and can be used to perform uncertainty propagation and quantification in model-based simulations, multi-fidelity data fusion, and surrogate-based optimization. We demonstrate the effectiveness of the proposed recursive GPR techniques through various examples. Specifically, we study the stochastic Burgers equation and the stochastic Oberbeck-Boussinesq equations describing natural convection within a square enclosure. In both cases we find that the standard deviation of the Gaussian predictors as well as the absolute errors relative to benchmark stochastic solutions are very small, suggesting that the proposed multi-fidelity GPR approaches can yield highly accurate results.
Tensor Decompositions for Signal Processing Applications: From two-way to multiway component analysis. The widespread use of multi-sensor technology and the emergence of big datasets has highlighted the limitations of standard flat-view matrix models and the necessity to move towards more versatile data analysis tools. We show that higher-order tensors (i.e., multiway arrays) enable such a fundamental paradigm shift towards models that are essentially polynomial and whose uniqueness, unlike the matrix methods, is guaranteed under very mild and natural conditions. Benefiting from the power of multilinear algebra as their mathematical backbone, data analysis techniques using tensor decompositions are shown to have great flexibility in the choice of constraints that match data properties, and to find more general latent components in the data than matrix-based methods. A comprehensive introduction to tensor decompositions is provided from a signal processing perspective, starting from the algebraic foundations, via basic Canonical Polyadic and Tucker models, through to advanced cause-effect and multi-view data analysis schemes. We show that tensor decompositions enable natural generalizations of some commonly used signal processing paradigms, such as canonical correlation and subspace techniques, signal separation, linear regression, feature extraction and classification. We also cover computational aspects, and point out how ideas from compressed sensing and scientific computing may be used for addressing the otherwise unmanageable storage and manipulation problems associated with big datasets. The concepts are supported by illustrative real world case studies illuminating the benefits of the tensor framework, as efficient and promising tools for modern signal processing, data analysis and machine learning applications; these benefits also extend to vector/matrix data through tensorization. Keywords: ICA, NMF, CPD, Tucker decomposition, HOSVD, tensor networks, Tensor Train.
Parameter and State Model Reduction for Large-Scale Statistical Inverse Problems A greedy algorithm for the construction of a reduced model with reduction in both parameter and state is developed for an efficient solution of statistical inverse problems governed by partial differential equations with distributed parameters. Large-scale models are too costly to evaluate repeatedly, as is required in the statistical setting. Furthermore, these models often have high-dimensional parametric input spaces, which compounds the difficulty of effectively exploring the uncertainty space. We simultaneously address both challenges by constructing a projection-based reduced model that accepts low-dimensional parameter inputs and whose model evaluations are inexpensive. The associated parameter and state bases are obtained through a greedy procedure that targets the governing equations, model outputs, and prior information. The methodology and results are presented for groundwater inverse problems in one and two dimensions.
Convergence Rates of Best N-term Galerkin Approximations for a Class of Elliptic sPDEs Deterministic Galerkin approximations of a class of second order elliptic PDEs with random coefficients on a bounded domain $D \subset \mathbb{R}^d$ are introduced and their convergence rates are estimated. The approximations are based on expansions of the random diffusion coefficients in $L^2(D)$-orthogonal bases, and on viewing the coefficients of these expansions as random parameters $y = y(\omega) = (y_i(\omega))$. This yields an equivalent parametric deterministic PDE whose solution $u(x,y)$ is a function of both the space variable $x \in D$ and the in general countably many parameters $y$. We establish new regularity theorems describing the smoothness properties of the solution $u$ as a map from $y \in U = (-1,1)^\infty$ to $V=H^{1}_{0}(D)$. These results lead to analytic estimates on the $V$ norms of the coefficients (which are functions of $x$) in a so-called "generalized polynomial chaos" (gpc) expansion of $u$. Convergence estimates of approximations of $u$ by best N-term truncated $V$-valued polynomials in the variable $y \in U$ are established. These estimates are of the form $N^{-r}$, where the rate of convergence $r$ depends only on the decay of the random input expansion. It is shown that $r$ exceeds the benchmark rate 1/2 afforded by Monte Carlo simulations with $N$ "samples" (i.e., deterministic solves) under mild smoothness conditions on the random diffusion coefficients. A class of fully discrete approximations is obtained by Galerkin approximation from a hierarchic family $\{V_{l}\}_{l=0}^{\infty} \subset V$ of finite element spaces in $D$ of the coefficients in the N-term truncated gpc expansions of $u(x,y)$. In contrast to previous works, the level $l$ of spatial resolution is adapted to the gpc coefficient. New regularity theorems describing the smoothness properties of the solution $u$ as a map from $y \in U = (-1,1)^\infty$ to a smoothness space $W \subset V$ are established, leading to analytic estimates on the $W$ norms of the gpc coefficients and on their space discretization error. The space $W$ coincides with $H^{2}(D)\cap H^{1}_{0}(D)$ in the case where $D$ is a smooth or convex domain. Our analysis shows that in realistic settings a convergence rate $N_{\mathrm{dof}}^{-s}$ in terms of the total number of degrees of freedom $N_{\mathrm{dof}}$ can be obtained. Here the rate $s$ is determined by both the best N-term approximation rate $r$ and the approximation order of the space discretization in $D$.
Is Gauss Quadrature Better than Clenshaw-Curtis? We compare the convergence behavior of Gauss quadrature with that of its younger brother, Clenshaw-Curtis. Seven-line MATLAB codes are presented that implement both methods, and experiments show that the supposed factor-of-2 advantage of Gauss quadrature is rarely realized. Theorems are given to explain this effect. First, following O'Hara and Smith in the 1960s, the phenomenon is explained as a consequence of aliasing of coefficients in Chebyshev expansions. Then another explanation is offered based on the interpretation of a quadrature formula as a rational approximation of $\log((z+1)/(z-1))$ in the complex plane. Gauss quadrature corresponds to Padé approximation at $z=\infty$. Clenshaw-Curtis quadrature corresponds to an approximation whose order of accuracy at $z=\infty$ is only half as high, but which is nevertheless equally accurate near $[-1,1]$.
Learning with tensors: a framework based on convex optimization and spectral regularization We present a framework based on convex optimization and spectral regularization to perform learning when feature observations are multidimensional arrays (tensors). We give a mathematical characterization of spectral penalties for tensors and analyze a unifying class of convex optimization problems for which we present a provably convergent and scalable template algorithm. We then specialize this class of problems to perform learning both in a transductive as well as in an inductive setting. In the transductive case one has an input data tensor with missing features and, possibly, a partially observed matrix of labels. The goal is to both infer the missing input features as well as predict the missing labels. For induction, the goal is to determine a model for each learning task to be used for out of sample prediction. Each training pair consists of a multidimensional array and a set of labels each of which corresponding to related but distinct tasks. In either case the proposed technique exploits precise low multilinear rank assumptions over unknown multidimensional arrays; regularization is based on composite spectral penalties and connects to the concept of Multilinear Singular Value Decomposition. As a by-product of using a tensor-based formalism, our approach allows one to tackle the multi-task case in a natural way. Empirical studies demonstrate the merits of the proposed methods.
Sharp thresholds for high-dimensional and noisy recovery of sparsity The problem of consistently estimating the sparsity pattern of a vector $\beta^* \in \mathbb{R}^p$ based on observations contaminated by noise arises in various contexts, including subset selection in regression, structure estimation in graphical models, sparse approximation, and signal denoising. Unfortunately, the natural optimization-theoretic formulation involves $\ell_0$ constraints, which leads to NP-hard problems in general; this intractability motivates the use of relaxations based on $\ell_1$ constraints. We analyze the behavior of $\ell_1$-constrained quadratic programming (QP), also referred to as the Lasso, for recovering the sparsity pattern. Our main result is to establish a sharp relation between the problem dimension $p$, the number $s$ of non-zero elements in $\beta^*$, and the number of observations $n$ that are required for reliable recovery. For a broad class of Gaussian ensembles satisfying mutual incoherence conditions, we establish existence and compute explicit values of thresholds $\theta_\ell$ and $\theta_u$ with the following properties: for any $\nu > 0$, if $n > 2s(\theta_u + \nu)\log(p - s) + s + 1$, then the Lasso succeeds in recovering the sparsity pattern with probability converging to one for large problems, whereas for $n < 2s(\theta_\ell - \nu)\log(p - s) + s + 1$, the probability of successful recovery converges to zero. For the special case of the uniform Gaussian ensemble, we show that $\theta_\ell = \theta_u = 1$, so that the threshold is sharp and exactly determined.
Speaker Verification Using Adapted Gaussian Mixture Models Reynolds, Douglas A., Quatieri, Thomas F., and Dunn, Robert B., Speaker Verification Using Adapted Gaussian Mixture Models, Digital Signal Processing 10 (2000), 19–41. In this paper we describe the major elements of MIT Lincoln Laboratory's Gaussian mixture model (GMM)-based speaker verification system used successfully in several NIST Speaker Recognition Evaluations (SREs). The system is built around the likelihood ratio test for verification, using simple but effective GMMs for likelihood functions, a universal background model (UBM) for alternative speaker representation, and a form of Bayesian adaptation to derive speaker models from the UBM. The development and use of a handset detector and score normalization to greatly improve verification performance is also described and discussed. Finally, representative performance benchmarks and system behavior experiments on NIST SRE corpora are presented.
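A minimal sketch of the UBM-plus-MAP-adaptation step is given below, using scikit-learn's GaussianMixture as the UBM and the usual relevance-factor update for the component means only; the synthetic features, component count, and relevance factor are illustrative assumptions, and the scoring is a simplified average log-likelihood ratio rather than the full system described above.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
background = rng.standard_normal((5000, 12))           # pooled background features
speaker = rng.standard_normal((300, 12)) + 0.5         # enrolment features, one speaker

# 1) Universal background model (UBM).
ubm = GaussianMixture(n_components=8, covariance_type='diag', random_state=0).fit(background)

# 2) MAP adaptation of the component means only (relevance factor r).
r = 16.0
resp = ubm.predict_proba(speaker)                      # (T, K) responsibilities
n_k = resp.sum(axis=0)                                 # soft counts per component
E_k = (resp.T @ speaker) / np.maximum(n_k, 1e-10)[:, None]
alpha = (n_k / (n_k + r))[:, None]
spk_means = alpha * E_k + (1.0 - alpha) * ubm.means_

# 3) Average log-likelihood ratio of a test segment: adapted model vs. UBM.
def avg_loglik(X, weights, means, covars):
    d = X.shape[1]
    lp = np.stack([np.log(w)
                   - 0.5 * (np.sum((X - mu) ** 2 / s2, axis=1)
                            + np.sum(np.log(s2)) + d * np.log(2 * np.pi))
                   for w, mu, s2 in zip(weights, means, covars)])
    m = lp.max(axis=0)
    return np.mean(m + np.log(np.exp(lp - m).sum(axis=0)))   # log-sum-exp over components

test = rng.standard_normal((100, 12)) + 0.5
score = (avg_loglik(test, ubm.weights_, spk_means, ubm.covariances_)
         - avg_loglik(test, ubm.weights_, ubm.means_, ubm.covariances_))
print(score)   # positive scores favour the hypothesised speaker
```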
Computation of equilibrium measures. We present a new way of computing equilibrium measures numerically, based on the Riemann–Hilbert formulation. For equilibrium measures whose support is a single interval, the simple algorithm consists of a Newton–Raphson iteration where each step only involves fast cosine transforms. The approach is then generalized for multiple intervals.
Interval type-2 fuzzy neural network control for X-Y-Theta motion control stage using linear ultrasonic motors An interval type-2 fuzzy neural network (IT2FNN) control system is proposed to control the position of an X-Y-Theta (X-Y-θ) motion control stage using linear ultrasonic motors (LUSMs) to track various contours. The IT2FNN, which combines the merits of interval type-2 fuzzy logic system (FLS) and neural network, is developed to simplify the computation and to confront the uncertainties of the X-Y-θ motion control stage. Moreover, the parameter learning of the IT2FNN based on the supervised gradient descent method is performed on line. The experimental results show that the tracking performance of the IT2FNN is significantly improved compared to type-1 FNN.
Scores: 1.012834, 0.013333, 0.013333, 0.011111, 0.011111, 0.005556, 0.002287, 0.000374, 0.000079, 0.000011, 0.000001, 0, 0, 0
Characterizing the elements of Earth's radiative budget: Applying uncertainty quantification to the CESM. Understanding and characterizing sources of uncertainty in climate modeling is an important task. Because of the ever increasing sophistication and resolution of climate modeling it is increasingly important to develop uncertainty quantification methods that minimize the computational cost that occurs when these methods are added to climate modeling. This research explores the application of sparse stochastic collocation with polynomial edge detection to characterize portions of the probability space associated with the Earth's radiative budget in the Community Earth System Model (CESM). Specifically, we develop surrogate models with error estimates for a range of acceptable input parameters that predict statistical values of the Earth's radiative budget as derived from the CESM simulation. We extend these results in resolution from T31 to T42 and in parameter space increasing the degrees of freedom from two to three.
Discontinuity detection in multivariate space for stochastic simulations Edge detection has traditionally been associated with detecting physical space jump discontinuities in one dimension, e.g. seismic signals, and two dimensions, e.g. digital images. Hence most of the research on edge detection algorithms is restricted to these contexts. High dimension edge detection can be of significant importance, however. For instance, stochastic variants of classical differential equations not only have variables in space/time dimensions, but additional dimensions are often introduced to the problem by the nature of the random inputs. The stochastic solutions to such problems sometimes contain discontinuities in the corresponding random space and a prior knowledge of jump locations can be very helpful in increasing the accuracy of the final solution. Traditional edge detection methods typically require uniform grid point distribution. They also often involve the computation of gradients and/or Laplacians, which can become very complicated to compute as the number of dimensions increases. The polynomial annihilation edge detection method, on the other hand, is more flexible in terms of its geometric specifications and is furthermore relatively easy to apply. This paper discusses the numerical implementation of the polynomial annihilation edge detection method to high dimensional functions that arise when solving stochastic partial differential equations.
Multi-Element Generalized Polynomial Chaos for Arbitrary Probability Measures We develop a multi-element generalized polynomial chaos (ME-gPC) method for arbitrary probability measures and apply it to solve ordinary and partial differential equations with stochastic inputs. Given a stochastic input with an arbitrary probability measure, its random space is decomposed into smaller elements. Subsequently, in each element a new random variable with respect to a conditional probability density function (PDF) is defined, and a set of orthogonal polynomials in terms of this random variable is constructed numerically. Then, the generalized polynomial chaos (gPC) method is implemented element-by-element. Numerical experiments show that the cost for the construction of orthogonal polynomials is negligible compared to the total time cost. Efficiency and convergence of ME-gPC are studied numerically by considering some commonly used random variables. ME-gPC provides an efficient and flexible approach to solving differential equations with random inputs, especially for problems related to long-term integration, large perturbation, and stochastic discontinuities.
High-Order Collocation Methods for Differential Equations with Random Inputs Recently there has been a growing interest in designing efficient methods for the solution of ordinary/partial differential equations with random inputs. To this end, stochastic Galerkin methods appear to be superior to other nonsampling methods and, in many cases, to several sampling methods. However, when the governing equations take complicated forms, numerical implementations of stochastic Galerkin methods can become nontrivial and care is needed to design robust and efficient solvers for the resulting equations. On the other hand, the traditional sampling methods, e.g., Monte Carlo methods, are straightforward to implement, but they do not offer convergence as fast as stochastic Galerkin methods. In this paper, a high-order stochastic collocation approach is proposed. Similar to stochastic Galerkin methods, the collocation methods take advantage of an assumption of smoothness of the solution in random space to achieve fast convergence. However, the numerical implementation of stochastic collocation is trivial, as it requires only repetitive runs of an existing deterministic solver, similar to Monte Carlo methods. The computational cost of the collocation methods depends on the choice of the collocation points, and we present several feasible constructions. One particular choice, based on sparse grids, depends weakly on the dimensionality of the random space and is more suitable for highly accurate computations of practical applications with large dimensional random inputs. Numerical examples are presented to demonstrate the accuracy and efficiency of the stochastic collocation methods.
Performance evaluation of generalized polynomial chaos In this paper we review some applications of generalized polynomial chaos expansion for uncertainty quantification. The mathematical framework is presented and the convergence of the method is demonstrated for model problems. In particular, we solve the first-order and second-order ordinary differential equations with random parameters, and examine the efficiency of generalized polynomial chaos compared to Monte Carlo simulations. It is shown that the generalized polynomial chaos can be orders of magnitude more efficient than Monte Carlo simulations when the dimensionality of random input is low, e.g. for correlated noise.
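As a concrete illustration of this efficiency claim, the sketch below projects a nonlinear function of a single Gaussian input onto a Hermite (Wiener) chaos basis by Gauss-Hermite quadrature and compares the chaos-based statistics with a Monte Carlo estimate; the test function, expansion order, and quadrature size are illustrative assumptions.

```python
import numpy as np
from math import factorial
from numpy.polynomial.hermite_e import hermegauss, hermeval

f = lambda xi: np.exp(0.5 * xi)              # model output as a function of xi ~ N(0, 1)

# Probabilists' Hermite (Wiener) chaos coefficients by Gauss-Hermite projection:
# c_k = E[f(xi) He_k(xi)] / E[He_k(xi)^2], with E[He_k(xi)^2] = k!.
order, nq = 6, 40
x, w = hermegauss(nq)
w = w / np.sqrt(2.0 * np.pi)                 # normalise weights to the N(0, 1) density
coeffs = []
for k in range(order + 1):
    ek = np.zeros(k + 1); ek[k] = 1.0        # coefficient vector selecting He_k
    coeffs.append(np.dot(w, f(x) * hermeval(x, ek)) / factorial(k))

# Mean and variance read directly off the chaos coefficients.
mean_pc = coeffs[0]
var_pc = sum(c ** 2 * factorial(k) for k, c in enumerate(coeffs) if k > 0)

# Monte Carlo reference: needs far more model evaluations for similar accuracy.
xi = np.random.default_rng(0).standard_normal(100_000)
print(mean_pc, f(xi).mean())
print(var_pc, f(xi).var())
```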
Spectral Polynomial Chaos Solutions of the Stochastic Advection Equation We present a new algorithm based on Wiener–Hermite functionals combined with Fourier collocation to solve the advection equation with stochastic transport velocity. We develop different strategies of representing the stochastic input, and demonstrate that this approach is orders of magnitude more efficient than Monte Carlo simulations for comparable accuracy.
Algorithm 672: generation of interpolatory quadrature rules of the highest degree of precision with preassigned nodes for general weight functions
An algorithm for pronominal anaphora resolution This paper presents an algorithm for identifying the noun phrase antecedents of third person pronouns and lexical anaphors (reflexives and reciprocals). The algorithm applies to the syntactic representations generated by McCord's Slot Grammar parser and relies on salience measures derived from syntactic structure and a simple dynamic model of attentional state. Like the parser, the algorithm is implemented in Prolog. The authors have tested it extensively on computer manual texts and conducted a blind test on manual text containing 360 pronoun occurrences. The algorithm successfully identifies the antecedent of the pronoun for 86% of these pronoun occurrences. The relative contributions of the algorithm's components to its overall success rate in this blind test are examined. Experiments were conducted with an enhancement of the algorithm that contributes statistically modelled information concerning semantic and real-world relations to the algorithm's decision procedure. Interestingly, this enhancement only marginally improves the algorithm's performance (by 2%). The algorithm is compared with other approaches to anaphora resolution that have been proposed in the literature. In particular, the search procedure of Hobbs' algorithm was implemented in the Slot Grammar framework and applied to the sentences in the blind test set. The authors' algorithm achieves a higher rate of success (4%) than Hobbs' algorithm. The relation of the algorithm to the centering approach is discussed, as well as to models of anaphora resolution that invoke a variety of informational factors in ranking antecedent candidates.
Cooperative spectrum sensing in cognitive radio networks: A survey Spectrum sensing is a key function of cognitive radio to prevent the harmful interference with licensed users and identify the available spectrum for improving the spectrum's utilization. However, detection performance in practice is often compromised with multipath fading, shadowing and receiver uncertainty issues. To mitigate the impact of these issues, cooperative spectrum sensing has been shown to be an effective method to improve the detection performance by exploiting spatial diversity. While cooperative gain such as improved detection performance and relaxed sensitivity requirement can be obtained, cooperative sensing can incur cooperation overhead. The overhead refers to any extra sensing time, delay, energy, and operations devoted to cooperative sensing and any performance degradation caused by cooperative sensing. In this paper, the state-of-the-art survey of cooperative sensing is provided to address the issues of cooperation method, cooperative gain, and cooperation overhead. Specifically, the cooperation method is analyzed by the fundamental components called the elements of cooperative sensing, including cooperation models, sensing techniques, hypothesis testing, data fusion, control channel and reporting, user selection, and knowledge base. Moreover, the impacting factors of achievable cooperative gain and incurred cooperation overhead are presented. The factors under consideration include sensing time and delay, channel impairments, energy efficiency, cooperation efficiency, mobility, security, and wideband sensing issues. The open research challenges related to each issue in cooperative sensing are also discussed.
Mathematical Foundations of Computer Science 1989, MFCS'89, Porabka-Kozubnik, Poland, August 28 - September 1, 1989, Proceedings
Real-time compressive tracking It is a challenging task to develop effective and efficient appearance models for robust object tracking due to factors such as pose variation, illumination change, occlusion, and motion blur. Existing online tracking algorithms often update models with samples from observations in recent frames. While much success has been demonstrated, numerous issues remain to be addressed. First, while these adaptive appearance models are data-dependent, there does not exist sufficient amount of data for online algorithms to learn at the outset. Second, online tracking algorithms often encounter the drift problems. As a result of self-taught learning, these mis-aligned samples are likely to be added and degrade the appearance models. In this paper, we propose a simple yet effective and efficient tracking algorithm with an appearance model based on features extracted from the multi-scale image feature space with data-independent basis. Our appearance model employs non-adaptive random projections that preserve the structure of the image feature space of objects. A very sparse measurement matrix is adopted to efficiently extract the features for the appearance model. We compress samples of foreground targets and the background using the same sparse measurement matrix. The tracking task is formulated as a binary classification via a naive Bayes classifier with online update in the compressed domain. The proposed compressive tracking algorithm runs in real-time and performs favorably against state-of-the-art algorithms on challenging sequences in terms of efficiency, accuracy and robustness.
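The feature-compression step at the heart of this kind of tracker can be sketched as follows: a fixed, very sparse random matrix projects high-dimensional image features into a low-dimensional space, and a naive Bayes classifier with online-updated Gaussian parameters scores candidates. The matrix density, feature dimensions, forgetting factor, and the simplified variance update below are illustrative assumptions rather than the paper's exact formulas.

```python
import numpy as np

rng = np.random.default_rng(0)
D, d = 10000, 50                          # original / compressed feature dimensions

# Very sparse random measurement matrix (entries in {-1, 0, +1}), generated once.
s = 3.0
R = rng.choice([np.sqrt(s), 0.0, -np.sqrt(s)],
               size=(d, D), p=[1 / (2 * s), 1 - 1 / s, 1 / (2 * s)])

def compress(features):                   # features: (n_samples, D)
    return features @ R.T

# Online naive Bayes with Gaussian class-conditionals in the compressed domain.
mu_pos, sig_pos = np.zeros(d), np.ones(d)
mu_neg, sig_neg = np.zeros(d), np.ones(d)
lam = 0.85                                # forgetting factor for the online update

def update(mu, sig, samples):
    # Simplified exponential-forgetting update of the per-dimension mean and std.
    m, st = samples.mean(axis=0), samples.std(axis=0) + 1e-6
    return lam * mu + (1 - lam) * m, lam * sig + (1 - lam) * st

def score(v):                             # higher score = more likely foreground
    lp = -0.5 * ((v - mu_pos) ** 2 / sig_pos ** 2) - np.log(sig_pos)
    ln = -0.5 * ((v - mu_neg) ** 2 / sig_neg ** 2) - np.log(sig_neg)
    return np.sum(lp - ln)

# One simulated frame: update with foreground/background patches, score candidates.
fg = compress(rng.standard_normal((20, D)) + 0.1)
bg = compress(rng.standard_normal((50, D)))
mu_pos, sig_pos = update(mu_pos, sig_pos, fg)
mu_neg, sig_neg = update(mu_neg, sig_neg, bg)
print(score(fg[0]), score(bg[0]))
```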
A neural fuzzy system with fuzzy supervised learning A neural fuzzy system learning with fuzzy training data (fuzzy if-then rules) is proposed in this paper. This system is able to process and learn numerical information as well as linguistic information. At first, we propose a five-layered neural network for the connectionist realization of a fuzzy inference system. The connectionist structure can house fuzzy logic rules and membership functions for fuzzy inference. We use α-level sets of fuzzy numbers to represent linguistic information. The inputs, outputs, and weights of the proposed network can be fuzzy numbers of any shape. Furthermore, they can be hybrid of fuzzy numbers and numerical numbers through the use of fuzzy singletons. Based on interval arithmetics, a fuzzy supervised learning algorithm is developed for the proposed system. It extends the normal supervised learning techniques to the learning problems where only linguistic teaching signals are available. The fuzzy supervised learning scheme can train the proposed system with desired fuzzy input-output pairs which are fuzzy numbers instead of the normal numerical values. With fuzzy supervised learning, the proposed system can be used for rule base concentration to reduce the number of rules in a fuzzy rule base. Simulation results are presented to illustrate the performance and applicability of the proposed system
Computing with Curvelets: From Image Processing to Turbulent Flows The curvelet transform allows an almost optimal nonadaptive sparse representation for curve-like features and edges. The authors describe some recent applications involving image processing, seismic data exploration, turbulent flows, and compressed sensing.
Fuzzy optimization of units products in mix-product selection problem using fuzzy linear programming approach In this paper, the modified S-curve membership function methodology is used in a real-life industrial problem of mix-product selection. This problem occurs in production planning management, where a decision maker plays an important role in making decisions in an uncertain environment. As analysts, we try to find a good enough solution for the decision maker to make a final decision. An industrial application of fuzzy linear programming (FLP) through the S-curve membership function has been investigated using a set of real-life data collected from a chocolate manufacturing company. The problem of fuzzy product-mix selection has been defined. The objective of this paper is to find optimal units of products with a higher level of satisfaction, with vagueness as a key factor. Since several decisions are to be taken, a table of optimal units of products with respect to vagueness and degree of satisfaction has been defined to identify the solution with a higher level of units of products and a higher degree of satisfaction. The fuzzy outcome shows that higher units of products need not lead to a higher degree of satisfaction. The findings of this work indicate that the optimal decision depends on the vagueness factor in the fuzzy mix-product selection problem. Furthermore, a high level of units of products is obtained when the vagueness is low.
Scores: 1.105263, 0.012281, 0.001491, 0.000729, 0.000035, 0.000008, 0, 0, 0, 0, 0, 0, 0, 0
Interval-Valued Linguistic Variables: An Application To The L-Fuzzy Contexts With Absent Values The goal of this work is to extend the concept of the linguistic variable to the interval-valued case since there are some situations where their application would be justified, as can be seen in this paper. After a brief introduction on fuzzy numbers and linguistic variables in [0, 1], we define the interval-valued linguistic variables and we study their behaviour through three properties. In the second part of the paper, we show their utility for replacing the absent values in an L-fuzzy context.
Fuzzy Concept Lattices Constrained By Hedges We study concept lattices constrained by hedges. The principal aim is to control, in a parameterical way, the size of concept lattices, i.e. the number of conceptual clusters extracted from data. The paper presents theoretical insight, comments, and examples. We introduce new, parameterized, concept-forming operators and study their properties. We obtain an axiomatic characterization of the concept-forming operators. Then, we show that a concept lattice with hedges is indeed a complete lattice which is isomorphic to an ordinary concept lattice. We describe the isomorphism and its inverse. These mappings serve as translation procedures. As a consequence, we obtain a theorem characterizing the structure of concept lattices with hedges which generalizes the well-known main theorem of ordinary concept lattices. Furthermore, the isomorphism and its inverse enable us to compute a concept lattice with hedges using algorithms for ordinary concept lattices. Further insight is provided for boundary choices of hedges. We demonstrate by experiments that the size reduction using hedges as parameters is smooth.
Contrast of a fuzzy relation In this paper we address a key problem in many fields: how a structured data set can be analyzed in order to take into account the neighborhood of each individual datum. We propose representing the dataset as a fuzzy relation, associating a membership degree with each element of the relation. We then introduce the concept of interval-contrast, a means of aggregating information contained in the immediate neighborhood of each element of the fuzzy relation. The interval-contrast measures the range of membership degrees present in each neighborhood. We use interval-contrasts to define the necessary properties of a contrast measure, construct several different local contrast and total contrast measures that satisfy these properties, and compare our expressions to other definitions of contrast appearing in the literature. Our theoretical results can be applied to several different fields. In an Appendix A, we apply our contrast expressions to photographic images.
Unified full implication algorithms of fuzzy reasoning This paper discusses the full implication inference of fuzzy reasoning. For all residuated implications induced by left-continuous t-norms, unified α-triple I algorithms are constructed to generalize the known results. As corollaries of the main results of this paper, some special algorithms can be easily derived based on four important residuated implications. These algorithms would be beneficial to applications of fuzzy reasoning. Based on properties of residuated implications, the proofs of many conclusions are greatly simplified.
Optimistic and pessimistic decision making with dissonance reduction using interval-valued fuzzy sets Interval-valued fuzzy sets have been developed and applied to multiple criteria analysis. However, the influence of optimism and pessimism on subjective judgments and the cognitive dissonance that accompanies the decision making process have not been studied thoroughly. This paper presents a new method to reduce cognitive dissonance and to relate optimism and pessimism in multiple criteria decision analysis in an interval-valued fuzzy decision environment. We utilized optimistic and pessimistic point operators to measure the effects of optimism and pessimism, respectively, and further determined a suitability function through weighted score functions. Considering the two objectives of maximal suitability and dissonance reduction, several optimization models were constructed to obtain the optimal weights for the criteria and to determine the corresponding degree of suitability for alternative rankings. Finally, an empirical study was conducted to validate the feasibility and applicability of the current method. We anticipate that the proposed method can provide insight on the influences of optimism, pessimism, and cognitive dissonance in decision analysis studies.
General formulation of formal grammars By extracting the basic properties common to the formal grammars that have appeared in the existing literature, we develop a general formulation of formal grammars. We define a pseudo grammar and derive from it the well-known probabilistic, fuzzy grammars and so on. Moreover, several interesting grammars such as ⊔∗ grammars, ⊔ ⊓ grammars, composite B-fuzzy grammars, and mixed fuzzy grammars, which have never appeared in any other papers before, are derived.
Similarity relations and fuzzy orderings. The notion of ''similarity'' as defined in this paper is essentially a generalization of the notion of equivalence. In the same vein, a fuzzy ordering is a generalization of the concept of ordering. For example, the relation x ≫ y (x is much larger than y) is a fuzzy linear ordering in the set of real numbers. More concretely, a similarity relation, S, is a fuzzy relation which is reflexive, symmetric, and transitive. Thus, let x, y be elements of a set X and μ_S(x,y) denote the grade of membership of the ordered pair (x,y) in S. Then S is a similarity relation in X if and only if, for all x, y, z in X, μ_S(x,x) = 1 (reflexivity), μ_S(x,y) = μ_S(y,x) (symmetry), and μ_S(x,z) ≥ ∨_y (μ_S(x,y) ∧ μ_S(y,z)) (transitivity), where ∨ and ∧ denote max and min, respectively. A fuzzy ordering is a fuzzy relation which is transitive. In particular, a fuzzy partial ordering, P, is a fuzzy ordering which is reflexive and antisymmetric, that is, (μ_P(x,y) > 0 and x ≠ y) ⇒ μ_P(y,x) = 0. A fuzzy linear ordering is a fuzzy partial ordering in which x ≠ y ⇒ μ_S(x,y) > 0 or μ_S(y,x) > 0. A fuzzy preordering is a fuzzy ordering which is reflexive. A fuzzy weak ordering is a fuzzy preordering in which x ≠ y ⇒ μ_S(x,y) > 0 or μ_S(y,x) > 0. Various properties of similarity relations and fuzzy orderings are investigated and, as an illustration, an extended version of Szpilrajn's theorem is proved.
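The three defining conditions are easy to check mechanically on a finite fuzzy relation stored as a matrix. The NumPy sketch below verifies reflexivity, symmetry, and max-min transitivity; the example relation is made up.

```python
import numpy as np

def is_similarity_relation(S, tol=1e-9):
    """Check reflexivity, symmetry, and max-min transitivity of a finite fuzzy
    relation S given as an n x n matrix of membership grades in [0, 1]."""
    reflexive = np.allclose(np.diag(S), 1.0, atol=tol)
    symmetric = np.allclose(S, S.T, atol=tol)
    # max-min composition: comp[x, z] = max_y min(S[x, y], S[y, z])
    comp = np.max(np.minimum(S[:, :, None], S[None, :, :]), axis=1)
    transitive = bool(np.all(S >= comp - tol))
    return reflexive, symmetric, transitive

S = np.array([[1.0, 0.8, 0.8],
              [0.8, 1.0, 0.9],
              [0.8, 0.9, 1.0]])
print(is_similarity_relation(S))   # (True, True, True)
```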
Universally composable security: a new paradigm for cryptographic protocols We propose a novel paradigm for defining security of cryptographic protocols, called universally composable security. The salient property of universally composable definitions of security is that they guarantee security even when a secure protocol is composed of an arbitrary set of protocols, or more generally when the protocol is used as a component of an arbitrary system. This is an essential property for maintaining security of cryptographic protocols in complex and unpredictable environments such as the Internet. In particular, universally composable definitions guarantee security even when an unbounded number of protocol instances are executed concurrently in an adversarially controlled manner, they guarantee non-malleability with respect to arbitrary protocols, and more. We show how to formulate universally composable definitions of security for practically any cryptographic task. Furthermore, we demonstrate that practically any such definition can be realized using known techniques, as long as only a minority of the participants are corrupted. We then proceed to formulate universally composable definitions of a wide array of cryptographic tasks, including authenticated and secure communication, key-exchange, public-key encryption, signature, commitment, oblivious transfer, zero knowledge and more. We also make initial steps towards studying the realizability of the proposed definitions in various settings.
Computing with words in decision making: foundations, trends and prospects Computing with Words (CW) methodology has been used in several different environments to narrow the differences between human reasoning and computing. As Decision Making is a typical human mental process, it seems natural to apply the CW methodology in order to create and enrich decision models in which the information that is provided and manipulated has a qualitative nature. In this paper we make a review of the developments of CW in decision making. We begin with an overview of the CW methodology and we explore different linguistic computational models that have been applied to the decision making field. Then we present an historical perspective of CW in decision making by examining the pioneer papers in the field along with its most recent applications. Finally, some current trends, open questions and prospects in the topic are pointed out.
Impact of interconnect variations on the clock skew of a gigahertz microprocessor Due to the large die sizes and tight relative clock skew margins, the impact of interconnect manufacturing variations on the clock skew in today's gigahertz microprocessors can no longer be ignored. Unlike manufacturing variations in the devices, the impact of the interconnect manufacturing variations on IC timing performance cannot be captured by worst/best case corner point methods. Thus it is difficult to estimate the clock skew variability due to interconnect variations. In this paper we analyze the timing impact of several key statistically independent interconnect variations in a context-dependent manner by applying a previously reported interconnect variational order-reduction technique. The results show that the interconnect variations can cause up to 25% clock skew variability in a modern microprocessor design.
Estimation of (near) low-rank matrices with noise and high-dimensional scaling We study an instance of high-dimensional inference in which the goal is to estimate a matrix Θ* ∈ R^{m1×m2} on the basis of N noisy observations. The unknown matrix Θ* is assumed to be either exactly low rank, or "near" low-rank, meaning that it can be well-approximated by a matrix with low rank. We consider a standard M-estimator based on regularization by the nuclear or trace norm over matrices, and analyze its performance under high-dimensional scaling. We define the notion of restricted strong convexity (RSC) for the loss function, and use it to derive nonasymptotic bounds on the Frobenius norm error that hold for a general class of noisy observation models, and apply to both exactly low-rank and approximately low rank matrices. We then illustrate consequences of this general theory for a number of specific matrix models, including low-rank multivariate or multi-task regression, system identification in vector autoregressive processes and recovery of low-rank matrices from random projections. These results involve nonasymptotic random matrix theory to establish that the RSC condition holds, and to determine an appropriate choice of regularization parameter. Simulation results show excellent agreement with the high-dimensional scaling of the error predicted by our theory.
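In the simplest observation model (matrix denoising with identity design), the nuclear-norm-regularized M-estimator has a closed form: soft-thresholding of the singular values of the observation. The sketch below shows that special case only; the regularization level is a common heuristic choice, not the paper's prescription, and the general observation models of the paper are not reproduced.

```python
import numpy as np

def svt(Y, lam):
    """Prox of the nuclear norm: argmin_T 0.5*||T - Y||_F^2 + lam*||T||_*,
    computed by soft-thresholding the singular values of Y."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ np.diag(np.maximum(s - lam, 0.0)) @ Vt

rng = np.random.default_rng(0)
m1, m2, r, sigma = 60, 50, 3, 0.1
Theta = rng.standard_normal((m1, r)) @ rng.standard_normal((r, m2))  # exactly low rank
Y = Theta + sigma * rng.standard_normal((m1, m2))                    # noisy observation
Theta_hat = svt(Y, lam=2.0 * sigma * np.sqrt(max(m1, m2)))           # heuristic lambda
print("relative Frobenius error:",
      np.linalg.norm(Theta_hat - Theta) / np.linalg.norm(Theta))
```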
User impatience and network performance In this work, we analyze from passive measurements the correlations between the user-induced interruptions of TCP connections and different end-to-end performance metrics. The aim of this study is to assess the possibility for a network operator to take into account the customers' experience for network monitoring. We first observe that the usual connection-level performance metrics of the interrupted connections are not very different, and sometimes better than those of normal connections. However, the request-level performance metrics show stronger correlations between the interruption rates and the network quality-of-service. Furthermore, we show that the user impatience could also be used to characterize the relative sensitivity of data applications to various network performance metrics.
A model to perform knowledge-based temporal abstraction over multiple signals In this paper we propose the Multivariable Fuzzy Temporal Profile model (MFTP), which enables the projection of expert knowledge on a physical system over a computable description. This description may be used to perform automatic abstraction on a set of parameters that represent the temporal evolution of the system. This model is based on the constraint satisfaction problem (CSP) formalism, which enables an explicit representation of the knowledge, and on fuzzy set theory, from which it inherits the ability to model the imprecision and uncertainty that are characteristic of human knowledge vagueness. We also present an application of the MFTP model to the recognition of landmarks in mobile robotics, specifically to the detection of doors on ultrasound sensor signals from a Nomad 200 robot.
Fuzzy optimization of units products in mix-product selection problem using fuzzy linear programming approach In this paper, the modified S-curve membership function methodology is used in a real-life industrial problem of mix-product selection. This problem occurs in production planning management, where a decision maker plays an important role in making decisions in an uncertain environment. As analysts, we try to find a good enough solution on which the decision maker can base a final decision. An industrial application of fuzzy linear programming (FLP) through the S-curve membership function has been investigated using a set of real-life data collected from a Chocolate Manufacturing Company. The problem of fuzzy product-mix selection has been defined. The objective of this paper is to find the optimal units of products with a higher level of satisfaction, with vagueness as a key factor. Since several decisions were to be taken, a table of optimal units of products with respect to vagueness and degree of satisfaction has been defined to identify the solution with a larger number of units of products and a higher degree of satisfaction. The fuzzy outcome shows that more units of products need not lead to a higher degree of satisfaction. The findings of this work indicate that the optimal decision depends on the vagueness factor in the fuzzy system of the mix-product selection problem. Furthermore, a high number of units of products is obtained when the vagueness is low.
1.22
0.088
0.005
0.001
0.000357
0.000015
0
0
0
0
0
0
0
0
Electronic marketplaces and innovation: the Canadian experience This paper examines electronic marketplaces as one "digital economy" innovation. Great expectations existed for electronic marketplaces in the late 1990s, leading to the establishment of hundreds of new venues for B2B buying and selling. It was feared that the improved efficiency over traditional market mechanisms meant existing business relationships and methods were doomed. Another concern was voiced; namely that smaller nations and peripheral regions would lose trade to electronic marketplaces in central locations. These issues are examined in a study of electronic marketplace innovation in Canada, leading to an assessment of their prospects there, as well as more generally.
The Scientific Community Metaphor Scientific communities have proven to be extremely successful at solving problems. They are inherently parallel systems and their macroscopic nature makes them amenable to careful study. In this paper the character of scientific research is examined drawing on sources in the philosophy and history of science. We maintain that the success of scientific research depends critically on its concurrency and pluralism. A variant of the language Ether is developed that embodies notions of concurrency necessary to emulate some of the problem solving behavior of scientific communities. Capabilities of scientific communities are discussed in parallel with simplified models of these capabilities in this language.
On Agent-Mediated Electronic Commerce This paper surveys and analyzes the state of the art of agent-mediated electronic commerce (e-commerce), concentrating particularly on the business-to-consumer (B2C) and business-to-business (B2B) aspects. From the consumer buying behavior perspective, agents are being used in the following activities: need identification, product brokering, buyer coalition formation, merchant brokering, and negotiation. The roles of agents in B2B e-commerce are discussed through the business-to-business transaction model that identifies agents as being employed in partnership formation, brokering, and negotiation. Having identified the roles for agents in B2C and B2B e-commerce, some of the key underpinning technologies of this vision are highlighted. Finally, we conclude by discussing the future directions and potential impediments to the wide-scale adoption of agent-mediated e-commerce.
Janus - A Paradigm For Active Decision Support Active decision support is concerned with developing advanced forms of decision support where the support tools are capable of actively participating in the decision making process, and decisions are made by fruitful collaboration between the human and the machine. It is currently an active and leading area of research within the field of decision support systems. The objective of this paper is to share the details of our research in this area. We present our overall research strategy for exploring advanced forms of decision support and discuss in detail our research prototype called JANUS that implements our ideas. We establish the contributions of our work and discuss our experiences and plans for future.
Multimedia-based interactive advising technology for online consumer decision support Multimedia technologies (such as Flash and QuickTime) have been widely used in online product presentation and promotion to portray products in a dynamic way. The continuous visual stimuli and associated sound effects provide vivid and interesting product presentations; hence, they engage online customers in examining products. Meanwhile, recent research has indicated that online shoppers want detailed and relevant product information and explanations [2]. A promising approach is to embed rich product information and explanations into multimedia-enhanced product demonstrations. This approach is called Multimedia-based Product Annotation (MPA), a product presentation in which customers can retrieve embedded product information in a multimedia context.
An algorithm for pronominal anaphora resolution This paper presents an algorithm for identifying the noun phrase antecedents of third person pronouns and lexical anaphors (reflexives and reciprocals). The algorithm applies to the syntactic representations generated by McCord's Slot Grammar parser and relies on salience measures derived from syntactic structure and a simple dynamic model of attentional state. Like the parser, the algorithm is implemented in Prolog. The authors have tested it extensively on computer manual texts and conducted a blind test on manual text containing 360 pronoun occurrences. The algorithm successfully identifies the antecedent of the pronoun for 86% of these pronoun occurrences. The relative contributions of the algorithm's components to its overall success rate in this blind test are examined. Experiments were conducted with an enhancement of the algorithm that contributes statistically modelled information concerning semantic and real-world relations to the algorithm's decision procedure. Interestingly, this enhancement only marginally improves the algorithm's performance (by 2%). The algorithm is compared with other approaches to anaphora resolution that have been proposed in the literature. In particular, the search procedure of Hobbs' algorithm was implemented in the Slot Grammar framework and applied to the sentences in the blind test set. The authors' algorithm achieves a higher rate of success (4%) than Hobbs' algorithm. The relation of the algorithm to the centering approach is discussed, as well as to models of anaphora resolution that invoke a variety of informational factors in ranking antecedent candidates.
The concept of a linguistic variable and its application to approximate reasoning-III By a linguistic variable we mean a variable whose values are words or sentences in a natural or artificial language. For example, Age is a linguistic variable if its values are linguistic rather than numerical, i.e., young, not young, very young, quite young, old, not very old and not very young, etc., rather than 20, 21, 22, 23, ... In more specific terms, a linguistic variable is characterized by a quintuple (𝒳, T(𝒳), U, G, M) in which 𝒳 is the name of the variable; T(𝒳) is the term-set of 𝒳, that is, the collection of its linguistic values; U is a universe of discourse; G is a syntactic rule which generates the terms in T(𝒳); and M is a semantic rule which associates with each linguistic value X its meaning, M(X), where M(X) denotes a fuzzy subset of U. The meaning of a linguistic value X is characterized by a compatibility function, c : U → [0, 1], which associates with each u in U its compatibility with X. Thus, the compatibility of age 27 with young might be 0.7, while that of 35 might be 0.2. The function of the semantic rule is to relate the compatibilities of the so-called primary terms in a composite linguistic value, e.g., young and old in not very young and not very old, to the compatibility of the composite value. To this end, the hedges such as very, quite, extremely, etc., as well as the connectives and and or are treated as nonlinear operators which modify the meaning of their operands in a specified fashion.
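A tiny sketch of this machinery is given below: a toy compatibility function for the primary term "young", the commonly used model of the hedge "very" as squaring, and "and"/"or" as min/max. The specific membership function and numbers are made up for illustration and are not the paper's definitions.

```python
import numpy as np

def young(age):
    """A toy compatibility function for the primary term 'young':
    1 below age 20, decreasing linearly to 0 at age 40."""
    return np.clip((40.0 - age) / 20.0, 0.0, 1.0)

very = lambda mu: mu ** 2          # a common model of the hedge 'very'
not_ = lambda mu: 1.0 - mu         # negation
and_ = np.minimum                  # connective 'and'
or_  = np.maximum                  # connective 'or'

ages = np.array([20.0, 27.0, 35.0])
mu_young = young(ages)
mu_not_very_young = not_(very(mu_young))
for a, m1, m2 in zip(ages, mu_young, mu_not_very_young):
    print(f"age {a:4.0f}: young={m1:.2f}  not very young={m2:.2f}")
```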
Fuzzy logic systems for engineering: a tutorial A fuzzy logic system (FLS) is unique in that it is able to simultaneously handle numerical data and linguistic knowledge. It is a nonlinear mapping of an input data (feature) vector into a scalar output, i.e., it maps numbers into numbers. Fuzzy set theory and fuzzy logic establish the specifics of the nonlinear mapping. This tutorial paper provides a guided tour through those aspects of fuzzy sets and fuzzy logic that are necessary to synthesize an FLS. It does this by starting with crisp set theory and dual logic and demonstrating how both can be extended to their fuzzy counterparts. Because engineering systems are, for the most part, causal, we impose causality as a constraint on the development of the FLS. After synthesizing a FLS, we demonstrate that it can be expressed mathematically as a linear combination of fuzzy basis functions, and is a nonlinear universal function approximator, a property that it shares with feedforward neural networks. The fuzzy basis function expansion is very powerful because its basis functions can be derived from either numerical data or linguistic knowledge, both of which can be cast into the forms of IF-THEN rules
Convergence Rates of Best N-term Galerkin Approximations for a Class of Elliptic sPDEs Deterministic Galerkin approximations of a class of second order elliptic PDEs with random coefficients on a bounded domain D⊂ℝ^d are introduced and their convergence rates are estimated. The approximations are based on expansions of the random diffusion coefficients in L^2(D)-orthogonal bases, and on viewing the coefficients of these expansions as random parameters y=y(ω)=(y_i(ω)). This yields an equivalent parametric deterministic PDE whose solution u(x,y) is a function of both the space variable x∈D and the in general countably many parameters y. We establish new regularity theorems describing the smoothness properties of the solution u as a map from y∈U=(−1,1)^∞ to $V=H^{1}_{0}(D)$. These results lead to analytic estimates on the V norms of the coefficients (which are functions of x) in a so-called “generalized polynomial chaos” (gpc) expansion of u. Convergence estimates of approximations of u by best N-term truncated V valued polynomials in the variable y∈U are established. These estimates are of the form N^{−r}, where the rate of convergence r depends only on the decay of the random input expansion. It is shown that r exceeds the benchmark rate 1/2 afforded by Monte Carlo simulations with N “samples” (i.e., deterministic solves) under mild smoothness conditions on the random diffusion coefficients. A class of fully discrete approximations is obtained by Galerkin approximation from a hierarchic family $\{V_{l}\}_{l=0}^{\infty}\subset V$ of finite element spaces in D of the coefficients in the N-term truncated gpc expansions of u(x,y). In contrast to previous works, the level l of spatial resolution is adapted to the gpc coefficient. New regularity theorems describing the smoothness properties of the solution u as a map from y∈U=(−1,1)^∞ to a smoothness space W⊂V are established leading to analytic estimates on the W norms of the gpc coefficients and on their space discretization error. The space W coincides with $H^{2}(D)\cap H^{1}_{0}(D)$ in the case where D is a smooth or convex domain. Our analysis shows that in realistic settings a convergence rate $N_{\mathrm{dof}}^{-s}$ in terms of the total number of degrees of freedom N_dof can be obtained. Here the rate s is determined by both the best N-term approximation rate r and the approximation order of the space discretization in D.
Coding Algorithms for 3DTV—A Survey Research efforts on 3DTV technology have been strengthened worldwide recently, covering the whole media processing chain from capture to display. Different 3DTV systems rely on different 3D scene representations that integrate various types of data. Efficient coding of these data is crucial for the success of 3DTV. Compression of pixel-type data including stereo video, multiview video, and associated depth or disparity maps extends available principles of classical video coding. Powerful algorithms and open international standards for multiview video coding and coding of video plus depth data are available and under development, which will provide the basis for introduction of various 3DTV systems and services in the near future. Compression of 3D mesh models has also reached a high level of maturity. For static geometry, a variety of powerful algorithms are available to efficiently compress vertices and connectivity. Compression of dynamic 3D geometry is currently a more active field of research. Temporal prediction is an important mechanism to remove redundancy from animated 3D mesh sequences. Error resilience is important for transmission of data over error prone channels, and multiple description coding (MDC) is a suitable way to protect data. MDC of still images and 2D video has already been widely studied, whereas multiview video and 3D meshes have been addressed only recently. Intellectual property protection of 3D data by watermarking is a pioneering research area as well. The 3D watermarking methods in the literature are classified into three groups, considering the dimensions of the main components of scene representations and the resulting components after applying the algorithm. In general, 3DTV coding technology is maturating. Systems and services may enter the market in the near future. However, the research area is relatively young compared to coding of other types of media. Therefore, there is still a lot of room for improvement and new development of algorithms.
Asymptotic Analysis of MAP Estimation via the Replica Method and Applications to Compressed Sensing The replica method is a nonrigorous but well-known technique from statistical physics used in the asymptotic analysis of large, random, nonlinear problems. This paper applies the replica method, under the assumption of replica symmetry, to study estimators that are maximum a posteriori (MAP) under a postulated prior distribution. It is shown that with random linear measurements and Gaussian noise, the replica-symmetric prediction of the asymptotic behavior of the postulated MAP estimate of an $n$-dimensional vector “decouples” as $n$ scalar postulated MAP estimators. The result is based on applying a hardening argument to the replica analysis of postulated posterior mean estimators of Tanaka and of Guo and Verdú. The replica-symmetric postulated MAP analysis can be readily applied to many estimators used in compressed sensing, including basis pursuit, least absolute shrinkage and selection operator (LASSO), linear estimation with thresholding, and zero norm-regularized estimation. In the case of LASSO estimation, the scalar estimator reduces to a soft-thresholding operator, and for zero norm-regularized estimation, it reduces to a hard threshold. Among other benefits, the replica method provides a computationally tractable method for precisely predicting various performance metrics including mean-squared error and sparsity pattern recovery probability.
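The scalar operators that the decoupled estimators reduce to are simple to state in code. The sketch below shows the soft-thresholding (LASSO-type) and hard-thresholding (zero-norm-regularized) maps only; the effective noise level and threshold predicted by the replica analysis are not reproduced here.

```python
import numpy as np

def soft_threshold(x, t):
    """Scalar soft-thresholding: the form the LASSO-type scalar MAP estimator takes."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def hard_threshold(x, t):
    """Scalar hard-thresholding: the form the zero-norm-regularized scalar estimator takes."""
    return np.where(np.abs(x) > t, x, 0.0)

x = np.linspace(-3.0, 3.0, 7)
print(soft_threshold(x, 1.0))   # [-2. -1.  0.  0.  0.  1.  2.]
print(hard_threshold(x, 1.0))   # [-3. -2.  0.  0.  0.  2.  3.]
```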
Preferences and their application in evolutionary multiobjective optimization The paper describes a new preference method and its use in multiobjective optimization. These preferences are developed with a goal to reduce the cognitive overload associated with the relative importance of a certain criterion within a multiobjective design environment involving large numbers of objectives. Their successful integration with several genetic-algorithm-based design search and optimi...
New Type-2 Rule Ranking Indices for Designing Parsimonious Interval Type-2 Fuzzy Logic Systems In this paper, we propose two novel indices for type-2 fuzzy rule ranking to identify the most influential fuzzy rules in designing type-2 fuzzy logic systems, and name them as R-values and c-values of fuzzy rules separately. The R-values of type-2 fuzzy rules are obtained by applying QR decomposition in which there is no need to estimate a rank as required in the SVD-QR with column pivoting algorithm. The c-values of type-2 fuzzy rules are suggested to rank rules based on the effects of rule consequents. Experimental results on a signal recovery problem have shown that by using the proposed indices the most influential type-2 fuzzy rules can be effectively selected to construct parsimonious type-2 fuzzy models while the system performances are kept at a satisfied level.
Designing type-2 fuzzy logic system controllers via fuzzy Lyapunov synthesis for the output regulator of a servomechanism with nonlinear backlash Fuzzy Lyapunov Synthesis is extended to the design of Type-2 Fuzzy Logic System Controllers for the output regulation problem for a servomechanism with nonlinear backlash. The problem in question is to design a feedback controller so as to obtain the closed-loop system in which all trajectories are bounded and the load of the driver is regulated to a desired position while also attenuating the influence of external disturbances. The servomotor position is the only measurement available for feedback; the proposed extension is far from trivial because of nonminimum phase properties of the system. Performance issues of the Type-2 Fuzzy Logic Regulator constructed are illustrated in a simulation study.
1.2
0.2
0.2
0.2
0.2
0
0
0
0
0
0
0
0
0
Breaking the Curse of Dimensionality, Or How to Use SVD in Many Dimensions For $d$-dimensional tensors with possibly large $d>3$, an hierarchical data structure, called the Tree-Tucker format, is presented as an alternative to the canonical decomposition. It has asymptotically the same (and often even smaller) number of representation parameters and viable stability properties. The approach involves a recursive construction described by a tree with the leaves corresponding to the Tucker decompositions of three-dimensional tensors, and is based on a sequence of SVDs for the recursively obtained unfolding matrices and on the auxiliary dimensions added to the initial “spatial” dimensions. It is shown how this format can be applied to the problem of multidimensional convolution. Convincing numerical examples are given.
Methodology for analysis of TSV stress induced transistor variation and circuit performance As continued scaling becomes increasingly difficult, 3D integration with through silicon vias (TSVs) has emerged as a viable solution to achieve higher bandwidth and power efficiency. Mechanical stress induced by thermal mismatch between TSVs and the silicon bulk arising during wafer fabrication and 3D integration, is a key constraint. In this work, we propose a complete flow to characterize the influence of TSV stress on transistor and circuit performance. First, we analyze the thermal stress contour near the silicon surface with single and multiple TSVs through both finite element analysis (FEA) and linear superposition methods. Then, the biaxial stress is converted to mobility and threshold voltage variations depending on transistor type and geometric relation between TSVs and transistors. Next, we propose an efficient algorithm to calculate circuit variation corresponding to TSV stress based on a grid partition approach. Finally, we discuss a TSV pattern optimization strategy, and employ a series of 17-stage ring oscillators using 40 nm CMOS technology as a test case for the proposed approach.
Efficient model reduction of parametrized systems by matrix discrete empirical interpolation In this work, we apply a Matrix version of the so-called Discrete Empirical Interpolation (MDEIM) for the efficient reduction of nonaffine parametrized systems arising from the discretization of linear partial differential equations. Dealing with affinely parametrized operators is crucial in order to enhance the online solution of reduced-order models (ROMs). However, in many cases such an affine decomposition is not readily available, and must be recovered through (often) intrusive procedures, such as the empirical interpolation method (EIM) and its discrete variant DEIM. In this paper we show that MDEIM represents a very efficient approach to deal with complex physical and geometrical parametrizations in a non-intrusive, efficient and purely algebraic way. We propose different strategies to combine MDEIM with a state approximation resulting either from a reduced basis greedy approach or Proper Orthogonal Decomposition. A posteriori error estimates accounting for the MDEIM error are also developed in the case of parametrized elliptic and parabolic equations. Finally, the capability of MDEIM to generate accurate and efficient ROMs is demonstrated on the solution of two computationally-intensive classes of problems occurring in engineering contexts, namely PDE-constrained shape optimization and parametrized coupled problems.
Quantics-TT Collocation Approximation of Parameter-Dependent and Stochastic Elliptic PDEs.
Application of hierarchical matrices for computing the Karhunen–Loève expansion Realistic mathematical models of physical processes contain uncertainties. These models are often described by stochastic differential equations (SDEs) or stochastic partial differential equations (SPDEs) with multiplicative noise. The uncertainties in the right-hand side or the coefficients are represented as random fields. To solve a given SPDE numerically one has to discretise the deterministic operator as well as the stochastic fields. The total dimension of the SPDE is the product of the dimensions of the deterministic part and the stochastic part. To approximate random fields with as few random variables as possible, but still retaining the essential information, the Karhunen–Loève expansion (KLE) becomes important. The KLE of a random field requires the solution of a large eigenvalue problem. Usually it is solved by a Krylov subspace method with a sparse matrix approximation. We demonstrate the use of sparse hierarchical matrix techniques for this. A log-linear computational cost of the matrix-vector product and a log-linear storage requirement yield an efficient and fast discretisation of the random fields presented.
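A small-scale version of the truncated Karhunen-Loeve expansion is easy to compute with a dense eigensolver, which is enough to see what the large eigenvalue problem looks like; the hierarchical-matrix machinery that makes it tractable for fine discretisations is not reproduced here. The 1-D exponential covariance, correlation length, and truncation order below are made-up illustration parameters.

```python
import numpy as np

# Truncated KLE of a 1-D random field with exponential covariance, via a
# Nystrom-style discrete eigenproblem solved densely (illustration only).
n, L, corr_len, sigma2 = 200, 1.0, 0.2, 1.0
x = np.linspace(0.0, L, n)
C = sigma2 * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)  # covariance matrix

w = L / n                                  # simple quadrature weight
eigval, eigvec = np.linalg.eigh(w * C)     # symmetric eigenproblem
idx = np.argsort(eigval)[::-1]
eigval, eigvec = eigval[idx], eigvec[:, idx]

m = 10                                      # truncation order
rng = np.random.default_rng(0)
xi = rng.standard_normal(m)                 # independent standard normal variables
field = eigvec[:, :m] @ (np.sqrt(np.maximum(eigval[:m], 0.0)) * xi) / np.sqrt(w)
print("captured variance fraction:", eigval[:m].sum() / eigval.sum())
print("one sample realization, min/max:", field.min(), field.max())
```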
Tensor-Train Decomposition A simple nonrecursive form of the tensor decomposition in $d$ dimensions is presented. It does not inherently suffer from the curse of dimensionality, it has asymptotically the same number of parameters as the canonical decomposition, but it is stable and its computation is based on low-rank approximation of auxiliary unfolding matrices. The new form gives a clear and convenient way to implement all basic operations efficiently. A fast rounding procedure is presented, as well as basic linear algebra operations. Examples showing the benefits of the decomposition are given, and the efficiency is demonstrated by the computation of the smallest eigenvalue of a 19-dimensional operator.
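A minimal TT-SVD sketch is given below: successive truncated SVDs of unfolding matrices produce the TT cores, and contracting the cores reproduces the tensor. This is a bare-bones illustration with a fixed truncation tolerance, not the paper's full algorithm with its rounding procedure and linear-algebra operations; the test tensor sin(i+j+k+l) is chosen because it has small TT ranks.

```python
import numpy as np

def tt_svd(tensor, eps=1e-10):
    """Decompose a d-dimensional array into TT cores by successive truncated SVDs."""
    shape = tensor.shape
    d = len(shape)
    cores, r_prev = [], 1
    C = tensor.reshape(r_prev * shape[0], -1)
    for k in range(d - 1):
        U, s, Vt = np.linalg.svd(C, full_matrices=False)
        r = max(1, int(np.sum(s > eps * s[0])))            # drop tiny singular values
        cores.append(U[:, :r].reshape(r_prev, shape[k], r))
        C = (s[:r, None] * Vt[:r])
        if k + 1 < d - 1:
            C = C.reshape(r * shape[k + 1], -1)
        r_prev = r
    cores.append(C.reshape(r_prev, shape[-1], 1))
    return cores

def tt_to_full(cores):
    """Contract TT cores back to the full tensor (for checking the decomposition)."""
    full = cores[0]
    for G in cores[1:]:
        full = np.tensordot(full, G, axes=([-1], [0]))
    return full.squeeze(axis=(0, -1))

grid = [np.arange(n) for n in (6, 7, 8, 9)]
T = np.sin(sum(np.ix_(*grid)))              # f(i,j,k,l) = sin(i+j+k+l): TT ranks 2
cores = tt_svd(T)
print([G.shape for G in cores])
print("reconstruction error:",
      np.linalg.norm(tt_to_full(cores) - T) / np.linalg.norm(T))
```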
Tensor rank is NP-complete We prove that computing the rank of a three-dimensional tensor over any finite field is NP-complete. Over the rational numbers the problem is NP-hard.
Probabilistic Power Flow Computation via Low-Rank and Sparse Tensor Recovery This paper presents a tensor-recovery method to solve probabilistic power flow problems. Our approach generates a high-dimensional and sparse generalized polynomial-chaos expansion that provides useful statistical information. The result can also speed up other essential routines in power systems (e.g., stochastic planning, operations and controls). Instead of simulating a power flow equation at all quadrature points, our approach only simulates an extremely small subset of samples. We suggest a model to exploit the underlying low-rank and sparse structure of high-dimensional simulation data arrays, making our technique applicable to power systems with many random parameters. We also present a numerical method to solve the resulting nonlinear optimization problem. Our algorithm is implemented in MATLAB and is verified by several benchmarks in MATPOWER $5.1$. Accurate results are obtained for power systems with up to $50$ independent random parameters, with a speedup factor up to $9\times 10^{20}$.
Tensor completion for estimating missing values in visual data. In this paper, we propose an algorithm to estimate missing values in tensors of visual data. The values can be missing due to problems in the acquisition process or because the user manually identified unwanted outliers. Our algorithm works even with a small amount of samples and it can propagate structure to fill larger missing regions. Our methodology is built on recent studies about matrix completion using the matrix trace norm. The contribution of our paper is to extend the matrix case to the tensor case by proposing the first definition of the trace norm for tensors and then by building a working algorithm. First, we propose a definition for the tensor trace norm that generalizes the established definition of the matrix trace norm. Second, similarly to matrix completion, the tensor completion is formulated as a convex optimization problem. Unfortunately, the straightforward problem extension is significantly harder to solve than the matrix case because of the dependency among multiple constraints. To tackle this problem, we developed three algorithms: simple low rank tensor completion (SiLRTC), fast low rank tensor completion (FaLRTC), and high accuracy low rank tensor completion (HaLRTC). The SiLRTC algorithm is simple to implement and employs a relaxation technique to separate the dependent relationships and uses the block coordinate descent (BCD) method to achieve a globally optimal solution; the FaLRTC algorithm utilizes a smoothing scheme to transform the original nonsmooth problem into a smooth one and can be used to solve a general tensor trace norm minimization problem; the HaLRTC algorithm applies the alternating direction method of multipliers (ADMMs) to our problem. Our experiments show potential applications of our algorithms and the quantitative evaluation indicates that our methods are more accurate and robust than heuristic approaches. The efficiency comparison indicates that FaLRTC and HaLRTC are more efficient than SiLRTC and, between FaLRTC and HaLRTC, the former is more efficient for obtaining a low-accuracy solution while the latter is preferred if a high-accuracy solution is desired.
Statistical blockade: very fast statistical simulation and modeling of rare circuit events and its application to memory design Circuit reliability under random parametric variation is an area of growing concern. For highly replicated circuits, e.g., static random access memories (SRAMs), a rare statistical event for one circuit may induce a not-so-rare system failure. Existing techniques perform poorly when tasked to generate both efficient sampling and sound statistics for these rare events. Statistical blockade is a novel Monte Carlo technique that allows us to efficiently filter--to block--unwanted samples that are insufficiently rare in the tail distributions we seek. The method synthesizes ideas from data mining and extreme value theory and, for the challenging application of SRAM yield analysis, shows speedups of 10-100 times over standard Monte Carlo.
Bayesian Inference and Optimal Design for the Sparse Linear Model The linear model with sparsity-favouring prior on the coefficients has important applications in many different domains. In machine learning, most methods to date search for maximum a posteriori sparse solutions and neglect to represent posterior uncertainties. In this paper, we address problems of Bayesian optimal design (or experiment planning), for which accurate estimates of uncertainty are essential. To this end, we employ expectation propagation approximate inference for the linear model with Laplace prior, giving new insight into numerical stability properties and proposing a robust algorithm. We also show how to estimate model hyperparameters by empirical Bayesian maximisation of the marginal likelihood, and propose ideas in order to scale up the method to very large underdetermined problems. We demonstrate the versatility of our framework on the application of gene regulatory network identification from micro-array expression data, where both the Laplace prior and the active experimental design approach are shown to result in significant improvements. We also address the problem of sparse coding of natural images, and show how our framework can be used for compressive sensing tasks. Part of this work appeared in Seeger et al. (2007b). The gene network identification application appears in Steinke et al. (2007).
Sparse Event Detection In Wireless Sensor Networks Using Compressive Sensing Compressive sensing is a revolutionary idea proposed recently to achieve a much lower sampling rate for sparse signals. For large wireless sensor networks, the events are relatively sparse compared with the number of sources. Because of deployment cost, the number of sensors is limited, and due to energy constraints, not all the sensors are turned on all the time. In this paper, the first contribution is to formulate the problem of sparse event detection in wireless sensor networks as a compressive sensing problem. The number of (wake-up) sensors can be greatly reduced to a level similar to the number of sparse events, which is much smaller than the total number of sources. Second, we suppose the events have a binary nature, and employ Bayesian detection using this prior information. Finally, we analyze the performance of the compressive sensing algorithms under Gaussian noise. From the simulation results, we show that the sampling rate can be reduced to 25% without sacrificing performance. As the sampling rate decreases further, the performance degrades gradually down to a sampling rate of 10%. Our proposed detection algorithm has much better performance than the L-1-magic algorithm proposed in the literature.
Postsilicon Tuning of Standby Supply Voltage in SRAMs to Reduce Yield Losses Due to Parametric Data-Retention Failures Lowering the supply voltage of static random access memories (SRAMs) during standby modes is an effective technique to reduce their leakage power consumption. To maximize leakage reductions, it is desirable to reduce the supply voltage as much as possible. SRAM cells can retain their data down to a certain voltage, called the data-retention voltage (DRV). Due to intra-die variations in process parameters, the DRV of cells differ within a single memory die. Hence, the minimum applicable standby voltage to a memory die $(V_{\rm DDLmin})$ is determined by the maximum DRV among its constituent cells. On the other hand, inter-die variations result in a die-to-die variation of $V_{\rm DDLmin}$. Applying an identical standby voltage to all dies, regardless of their corresponding $V_{\rm DDLmin}$, can result in the failure of some dies, due to data-retention failures (DRFs), entailing yield losses. In this work, we first show that the yield losses can be significant if the standby voltage of SRAMs is reduced aggressively. Then, we propose a postsilicon standby voltage tuning scheme to avoid the yield losses due to DRFs, while reducing the leakage currents effectively. Simulation results in a 45-nm predictive technology show that tuning standby voltage of SRAMs can enhance data-retention yield by 10%–50%.
On the Rekeying Load in Group Key Distributions Using Cover-Free Families Key distributions based on cover-free families have been recently proposed for secure rekeying in group communication systems after multiple simultaneous user ejections. Existing literature has not quantified how difficult this rekeying operation might be. This study provides upper bounds on the number of messages necessary to rekey a key distribution based on symmetric combinatorial designs after one or two simultaneous user ejections. Connections are made to results from finite geometry to show that these bounds are tight for certain key distributions. It is shown that in general determining the minimal number of messages necessary to rekey a group communication system based on a cover-free family is NP-hard.
1.023498
0.02107
0.02
0.01465
0.008983
0.006628
0.003821
0.000932
0.000278
0.000049
0.000003
0
0
0
Exponential Convergence of Gauss-Jacobi Quadratures for Singular Integrals over Simplices in Arbitrary Dimension. Galerkin discretizations of integral operators in R^d require the evaluation of integrals ∫_{S^{(1)}} ∫_{S^{(2)}} f(x, y) dy dx, where S^{(1)}, S^{(2)} are d-dimensional simplices and f has a singularity at x = y. In [A. Chernov, T. von Petersdorff, and C. Schwab, M2AN Math. Model. Numer. Anal., 45 (2011), pp. 387-422] we constructed a family of hp-quadrature rules Q_N with N function evaluations for a class of integrands f allowing for algebraic singularities at x = y, possibly nonintegrable with respect to either dx or dy (hypersingular kernels) and Gevrey-δ smooth for x ≠ y. This is satisfied for kernels from broad classes of pseudodifferential operators. We proved that Q_N achieves the exponential convergence rate O(exp(-γN^γ)) with the exponent γ = 1/(2dδ + 1). In this paper we consider a special singularity ‖x − y‖^α with real α which appears frequently in applications and prove that an improved convergence rate with γ = 1/(2dδ) is achieved if a certain one-dimensional Gauss-Jacobi quadrature rule is used in the (univariate) "singular coordinate." We also analyze approximation by tensor Gauss-Jacobi quadratures in the "regular coordinates." We illustrate the performance of the new Gauss-Jacobi rules on several numerical examples and compare it to the hp-quadratures from [A. Chernov, T. von Petersdorff, and C. Schwab, M2AN Math. Model. Numer. Anal., 45 (2011), pp. 387-422].
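The one-dimensional idea behind the "singular coordinate" treatment can be illustrated with a toy integral: absorb the algebraic factor x^α into a Gauss-Jacobi weight so the rule only samples the smooth part. This sketch uses SciPy's standard Gauss-Jacobi nodes and a weighted QUADPACK reference; it is not the paper's hp-quadrature over simplices.

```python
import numpy as np
from scipy.special import roots_jacobi
from scipy.integrate import quad

# Integrate cos(x) * x**alpha over (0, 1] with alpha = -1/2 (algebraic endpoint
# singularity).  The map x = (t + 1)/2 turns x**alpha into the Gauss-Jacobi
# weight (1 + t)**alpha on [-1, 1].
def gauss_jacobi_singular(f_smooth, alpha, n):
    t, w = roots_jacobi(n, 0.0, alpha)        # weight (1 - t)^0 * (1 + t)^alpha
    return 0.5 ** (alpha + 1.0) * np.sum(w * f_smooth((t + 1.0) / 2.0))

alpha = -0.5
ref, _ = quad(np.cos, 0.0, 1.0, weight='alg', wvar=(alpha, 0.0))   # weighted reference
for n in (2, 4, 8):
    approx = gauss_jacobi_singular(np.cos, alpha, n)
    print(f"n = {n}:  approx = {approx:.15f}  error = {abs(approx - ref):.2e}")
```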
Numerical quadrature for high-dimensional singular integrals over parallelotopes We introduce and analyze a family of algorithms for an efficient numerical approximation of integrals of the form I = ∫_{C^{(1)}} ∫_{C^{(2)}} F(x, y, y−x) dy dx where C^{(1)}, C^{(2)} are d-dimensional parallelotopes (i.e. affine images of d-hypercubes) and F has a singularity at y−x = 0. Such integrals appear in Galerkin discretization of integral operators in R^d. We construct a family of quadrature rules Q_N with N function evaluations for a class of integrands F which may have algebraic singularities at y−x = 0 and are Gevrey-δ regular for y−x ≠ 0. The main tool is an explicit regularizing coordinate transformation, simultaneously simplifying the singular support and the domain of integration. For the full tensor product variant of the suggested quadrature family we prove that Q_N achieves the exponential convergence rate O(exp(-rN^γ)) with the exponent γ = 1/(2dδ + 1). In the special case of a singularity of the form ‖y−x‖^α with real α we prove that the improved convergence rate of γ = 1/(2dδ) is achieved if a certain modified one-dimensional Gauss-Jacobi quadrature rule is used in the singular direction. We give numerical results for various types of the quadrature rules, in particular based on tensor product rules, standard (Smolyak), optimized and adaptive sparse grid quadratures and Sobol' sequences.
Fast calculation of coefficients in the Smolyak algorithm For many numerical problems involving smooth multivariate functions on d-cubes, the so-called Smolyak algorithm (or Boolean method, sparse grid method, etc.) has proved to be very useful. The final form of the algorithm (see equation (12) below) requires functional evaluation as well as the computation of coefficients. The latter can be done in different ways that may have considerable influence on the total cost of the algorithm. In this paper, we try to diminish this influence as far as possible. For example, we present an algorithm for the integration problem that reduces the time for the calculation and exposition of the coefficients in such a way that for increasing dimension, this time is small compared to dn, where n is the number of involved function values.
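For reference, the coefficients in question are those of the standard combination-technique form of the Smolyak rule, A(q, d) = Σ_{q-d+1 ≤ |i| ≤ q} (-1)^{q-|i|} C(d-1, q-|i|) (U^{i_1} ⊗ ... ⊗ U^{i_d}). The sketch below enumerates them naively and checks that they sum to 1 (exactness for constants); it is not the paper's optimized coefficient algorithm.

```python
from itertools import product
from math import comb

def smolyak_coefficients(q, d):
    """Combination-technique coefficients of the Smolyak rule of level q in d
    dimensions, returned as a dict {multi-index i: coefficient}."""
    coeffs = {}
    for i in product(range(1, q + 1), repeat=d):
        s = sum(i)
        if q - d + 1 <= s <= q:
            coeffs[i] = (-1) ** (q - s) * comb(d - 1, q - s)
    return coeffs

for d, q in [(2, 4), (3, 5), (5, 7)]:
    c = smolyak_coefficients(q, d)
    # Each 1-D rule reproduces constants, so the coefficients must sum to 1.
    print(f"d={d}, q={q}: {len(c)} tensor-product terms, coefficient sum = {sum(c.values())}")
```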
Dimension–Adaptive Tensor–Product Quadrature We consider the numerical integration of multivariate functions defined over the unit hypercube. Here, we especially address the high–dimensional case, where in general the curse of dimension is encountered. Due to the concentration of measure phenomenon, such functions can often be well approximated by sums of lower–dimensional terms. The problem, however, is to find a good expansion given little knowledge of the integrand itself. The dimension–adaptive quadrature method which is developed and presented in this paper aims to find such an expansion automatically. It is based on the sparse grid method which has been shown to give good results for low- and moderate–dimensional problems. The dimension–adaptive quadrature method tries to find important dimensions and adaptively refines in this respect guided by suitable error estimators. This leads to an approach which is based on generalized sparse grid index sets. We propose efficient data structures for the storage and traversal of the index sets and discuss an efficient implementation of the algorithm. The performance of the method is illustrated by several numerical examples from computational physics and finance where dimension reduction is obtained from the Brownian bridge discretization of the underlying stochastic process.
Neural networks and approximation theory
Convergence Properties Of Gaussian Quadrature-Formulas
Anaphora for everyone: pronominal anaphora resolution without a parser We present an algorithm for anaphora resolution which is a modified and extended version of that developed by (Lappin and Leass, 1994). In contrast to that work, our algorithm does not require in-depth, full, syntactic parsing of text. Instead, with minimal compromise in output quality, the modifications enable the resolution process to work from the output of a part of speech tagger, enriched only with annotations of grammatical function of lexical items in the input text stream. Evaluation of the results of our implementation demonstrates that accurate anaphora resolution can be realized within natural language processing frameworks which do not---or cannot---employ robust and reliable parsing components.
Universally composable security: a new paradigm for cryptographic protocols We propose a novel paradigm for defining security of cryptographic protocols, called universally composable security. The salient property of universally composable definitions of security is that they guarantee security even when a secure protocol is composed of an arbitrary set of protocols, or more generally when the protocol is used as a component of an arbitrary system. This is an essential property for maintaining security of cryptographic protocols in complex and unpredictable environments such as the Internet. In particular, universally composable definitions guarantee security even when an unbounded number of protocol instances are executed concurrently in an adversarially controlled manner, they guarantee non-malleability with respect to arbitrary protocols, and more. We show how to formulate universally composable definitions of security for practically any cryptographic task. Furthermore, we demonstrate that practically any such definition can be realized using known techniques, as long as only a minority of the participants are corrupted. We then proceed to formulate universally composable definitions of a wide array of cryptographic tasks, including authenticated and secure communication, key-exchange, public-key encryption, signature, commitment, oblivious transfer, zero knowledge and more. We also make initial steps towards studying the realizability of the proposed definitions in various settings.
Computing with words in decision making: foundations, trends and prospects Computing with Words (CW) methodology has been used in several different environments to narrow the differences between human reasoning and computing. As Decision Making is a typical human mental process, it seems natural to apply the CW methodology in order to create and enrich decision models in which the information that is provided and manipulated has a qualitative nature. In this paper we make a review of the developments of CW in decision making. We begin with an overview of the CW methodology and we explore different linguistic computational models that have been applied to the decision making field. Then we present an historical perspective of CW in decision making by examining the pioneer papers in the field along with its most recent applications. Finally, some current trends, open questions and prospects in the topic are pointed out.
Correlation-preserved non-Gaussian statistical timing analysis with quadratic timing model Recent study shows that the existing first order canonical timing model is not sufficient to represent the dependency of the gate delay on the variation sources when processing and operational variations become more and more significant. Due to the nonlinearity of the mapping from variation sources to the gate/wire delay, the distribution of the delay is no longer Gaussian even if the variation sources are normally distributed. A novel quadratic timing model is proposed to capture the non-linearity of the dependency of gate/wire delays and arrival times on the variation sources. Systematic methodology is also developed to evaluate the correlation and distribution of the quadratic timing model. Based on these, a novel statistical timing analysis algorithm is proposed which retains the complete correlation information during timing analysis and has the same computation complexity as the algorithm based on the canonical timing model. Tested on the ISCAS circuits, the proposed algorithm shows 10× accuracy improvement over the existing first order algorithm while no significant extra runtime is needed.
The bittorrent p2p file-sharing system: measurements and analysis Of the many P2P file-sharing prototypes in existence, BitTorrent is one of the few that has managed to attract millions of users. BitTorrent relies on other (global) components for file search, employs a moderator system to ensure the integrity of file data, and uses a bartering technique for downloading in order to prevent users from freeriding. In this paper we present a measurement study of BitTorrent in which we focus on four issues, viz. availability, integrity, flashcrowd handling, and download performance. The purpose of this paper is to aid in the understanding of a real P2P system that apparently has the right mechanisms to attract a large user community, to provide measurement data that may be useful in modeling P2P systems, and to identify design issues in such systems.
Risk-based access control systems built on fuzzy inferences Fuzzy inference is a promising approach to implement risk-based access control systems. However, its application to access control raises some novel problems that have not been yet investigated. First, because there are many different fuzzy operations, one must choose the fuzzy operations that best address security requirements. Second, risk-based access control, though it improves information flow and better addresses requirements from critical organizations, may result in damages by malicious users before mitigating steps are taken. Third, the scalability of a fuzzy inference-based access control system is questionable. The time required by a fuzzy inference engine to estimate risks may be quite high especially when there are tens of parameters and hundreds of fuzzy rules. However, an access control system may need to serve hundreds or thousands of users. In this paper, we investigate these issues and present our solutions or answers to them.
Automatic discovery of algorithms for multi-agent systems Automatic algorithm generation for large-scale distributed systems is one of the holy grails of artificial intelligence and agent-based modeling. It has direct applicability in future engineered (embedded) systems, such as mesh networks of sensors and actuators where there is a high need to harness their capabilities via algorithms that have good scalability characteristics. NetLogo has been extensively used as a teaching and research tool by computer scientists, for example for exploring distributed algorithms. Inventing such an algorithm usually involves a tedious reasoning process for each individual idea. In this paper, we report preliminary results in our effort to push the boundary of the discovery process even further, by replacing the classical approach with a guided search strategy that makes use of genetic programming targeting the NetLogo simulator. The effort moves from a manual model implementation to an automated discovery process. The only activity that is required is the implementation of primitives and the configuration of the tool-chain. In this paper, we explore the capabilities of our framework by re-inventing five well-known distributed algorithms.
The laws of large numbers for fuzzy random variables A new treatment of the weak and strong laws of large numbers for fuzzy random variables is discussed in this paper by proposing the notions of convergence in probability and convergence with probability one for fuzzy random variables. We first consider the limit properties of fuzzy numbers by invoking the Hausdorff metric, and then extend them to convergence in probability and convergence with probability one for fuzzy random variables. We provide the notions of weak and strong convergence in probability and weak and strong convergence with probability one for fuzzy random variables. Finally, we establish the weak and strong laws of large numbers for fuzzy random variables in both the weak and the strong sense.
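The metric ingredient here is easy to illustrate: for intervals, the Hausdorff distance is max(|a-c|, |b-d|), and a standard metric on fuzzy numbers takes the supremum of this distance over the α-cuts. The sketch below does this for two triangular fuzzy numbers; it is only an illustration of the metric, not of the paper's convergence results, and the triangular shape and grid of α-levels are assumptions.

```python
import numpy as np

def alpha_cut_triangular(l, m, r, alpha):
    """alpha-cut [lower, upper] of a triangular fuzzy number (l, m, r)."""
    return np.array([l + alpha * (m - l), r - alpha * (r - m)])

def interval_hausdorff(I, J):
    """Hausdorff distance between two closed intervals [a, b] and [c, d]."""
    return max(abs(I[0] - J[0]), abs(I[1] - J[1]))

def fuzzy_hausdorff(tri1, tri2, n_alpha=101):
    """Supremum over alpha of the Hausdorff distance between alpha-cuts."""
    alphas = np.linspace(0.0, 1.0, n_alpha)
    return max(interval_hausdorff(alpha_cut_triangular(*tri1, a),
                                  alpha_cut_triangular(*tri2, a)) for a in alphas)

print(fuzzy_hausdorff((0.0, 1.0, 2.0), (0.5, 1.2, 2.1)))   # 0.5, attained at alpha = 0
```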
1.128889
0.066667
0.032
0.004324
0.000165
0.000054
0
0
0
0
0
0
0
0
Asymptotic achievability of the Cramér-Rao bound for noisy compressive sampling We consider a model of the form y = Ax + n, where x ∈ C^M is sparse with at most L nonzero coefficients in unknown locations, y ∈ C^N is the observation vector, A ∈ C^{N×M} is the measurement matrix and n ∈ C^N is the Gaussian noise. We develop a Cramér-Rao bound on the mean squared estimation error of the nonzero elements of x, corresponding to the genie-aided estimator (GAE) which is provided with the locations of the nonzero elements of x. Intuitively, the mean squared estimation error of any estimator without the knowledge of the locations of the nonzero elements of x is no less than that of the GAE. Assuming that L/N is fixed, we establish the existence of an estimator that asymptotically achieves the Cramér-Rao bound without any knowledge of the locations of the nonzero elements of x as N → ∞, for A a random Gaussian matrix whose elements are drawn i.i.d. according to N(0, 1).
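The genie-aided quantity is concrete: with the support S known, the problem reduces to least squares on the columns A_S, and the bound is sigma^2 * trace((A_S^T A_S)^{-1}). The sketch below checks this against the empirical error of the genie-aided least-squares estimate, using a real-valued analogue of the complex model for simplicity and made-up problem sizes.

```python
import numpy as np

rng = np.random.default_rng(1)
N, M, L, sigma = 128, 512, 8, 0.1
A = rng.standard_normal((N, M)) / np.sqrt(N)
S = rng.choice(M, size=L, replace=False)        # the support the genie reveals
x_S = rng.standard_normal(L)

A_S = A[:, S]
crb = sigma ** 2 * np.trace(np.linalg.inv(A_S.T @ A_S))   # genie-aided CRB

trials, mse = 2000, 0.0
for _ in range(trials):
    y = A_S @ x_S + sigma * rng.standard_normal(N)
    x_hat = np.linalg.lstsq(A_S, y, rcond=None)[0]         # genie-aided LS estimate
    mse += np.sum((x_hat - x_S) ** 2)
print("CRB:", crb, "  empirical MSE of genie-aided LS:", mse / trials)
```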
Online Sparse System Identification And Signal Reconstruction Using Projections Onto Weighted L(1) Balls This paper presents a novel projection-based adaptive algorithm for sparse signal and system identification. The sequentially observed data are used to generate an equivalent sequence of closed convex sets, namely hyperslabs. Each hyperslab is the geometric equivalent of a cost criterion that quantifies "data mismatch." Sparsity is imposed by the introduction of appropriately designed weighted l(1) balls and the related projection operator is also derived. The algorithm develops around projections onto the sequence of the generated hyperslabs as well as the weighted l(1) balls. The resulting scheme exhibits linear dependence, with respect to the unknown system's order, on the number of multiplications/additions and an O(L log_2 L) dependence on sorting operations, where L is the length of the system/signal to be estimated. Numerical results are also given to validate the performance of the proposed method against the Least-Absolute Shrinkage and Selection Operator (LASSO) algorithm and two very recently developed adaptive sparse schemes that fuse arguments from the LMS/RLS adaptation mechanisms with those imposed by the Lasso rationale.
Circulant and Toeplitz matrices in compressed sensing Compressed sensing seeks to recover a sparse vector from a small number of linear and non-adaptive measurements. While most work so far focuses on Gaussian or Bernoulli random measurements, we investigate the use of partial random circulant and Toeplitz matrices in connection with recovery by l1-minimization. In contrast to recent work in this direction we allow the use of an arbitrary subset of rows of a circulant and Toeplitz matrix. Our recovery result predicts that the necessary number of measurements to ensure sparse reconstruction by l1-minimization with random partial circulant or Toeplitz matrices scales linearly in the sparsity up to a log-factor in the ambient dimension. This represents a significant improvement over previous recovery results for such matrices. As a main tool for the proofs we use a new version of the non-commutative Khintchine inequality.
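A construction sketch of the partial random circulant measurement matrix discussed above (Rademacher generating vector, arbitrary subset of rows); recovery itself would be handed to any l1-minimization solver. The sizes are hypothetical.

```python
import numpy as np
from scipy.linalg import circulant

rng = np.random.default_rng(1)
n, m, s = 256, 80, 10                        # ambient dimension, measurements, sparsity (hypothetical)
c = rng.choice([-1.0, 1.0], size=n)          # random generating vector
C = circulant(c)                             # full n x n circulant matrix
rows = rng.choice(n, size=m, replace=False)  # arbitrary subset of rows, as allowed by the result
A = C[rows, :] / np.sqrt(m)

x = np.zeros(n)
x[rng.choice(n, size=s, replace=False)] = rng.standard_normal(s)
y = A @ x                                    # measurements; x would be reconstructed via l1-minimization
```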
Sharp thresholds for high-dimensional and noisy sparsity recovery using l1-constrained quadratic programming (Lasso) The problem of consistently estimating the sparsity pattern of a vector β* ∈ R^p based on observations contaminated by noise arises in various contexts, including signal denoising, sparse approximation, compressed sensing, and model selection. We analyze the behavior of l1-constrained quadratic programming (QP), also referred to as the Lasso, for recovering the sparsity pattern. Our main result is to establish precise conditions on the problem dimension p, the number k of nonzero elements in β*, and the number of observations n that are necessary and sufficient for sparsity pattern recovery using the Lasso. We first analyze the case of observations made using deterministic design matrices and sub-Gaussian additive noise, and provide sufficient conditions for support recovery and l∞-error bounds, as well as results showing the necessity of incoherence and bounds on the minimum value. We then turn to the case of random designs, in which each row of the design is drawn from a N(0, Σ) ensemble. For a broad class of Gaussian ensembles satisfying mutual incoherence conditions, we compute explicit values of thresholds 0 < θl(Σ) ≤ θu(Σ) < +∞ with the following property: for any δ > 0, if n > 2(θu + δ)k log(p − k), then the Lasso succeeds in recovering the sparsity pattern with probability converging to one for large problems, whereas for n < 2(θl − δ)k log(p − k), the probability of successful recovery converges to zero. For the special case of the uniform Gaussian ensemble (Σ = I_{p×p}), we show that θl = θu = 1, so that the precise threshold n = 2k log(p − k) is exactly determined.
Subspace pursuit for compressive sensing signal reconstruction We propose a new method for reconstruction of sparse signals with and without noisy perturbations, termed the subspace pursuit algorithm. The algorithm has two important characteristics: low computational complexity, comparable to that of orthogonal matching pursuit techniques when applied to very sparse signals, and reconstruction accuracy of the same order as that of linear programming (LP) optimization methods. The presented analysis shows that in the noiseless setting, the proposed algorithm can exactly reconstruct arbitrary sparse signals provided that the sensing matrix satisfies the restricted isometry property with a constant parameter. In the noisy setting and in the case that the signal is not exactly sparse, it can be shown that the mean-squared error of the reconstruction is upper-bounded by constant multiples of the measurement and signal perturbation energies.
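A compact sketch of the subspace pursuit iteration described above: expand the current support estimate with the strongest residual correlations, solve a least-squares problem, prune back to K atoms, and stop once the residual stops shrinking. Parameter names and the stopping rule details are mine, not the paper's.

```python
import numpy as np

def subspace_pursuit(A, y, K, max_iter=50):
    """Sketch of the subspace pursuit algorithm for K-sparse recovery."""
    m, n = A.shape
    # Initial support: the K columns most correlated with y.
    S = np.argsort(np.abs(A.T @ y))[-K:]
    x_S, *_ = np.linalg.lstsq(A[:, S], y, rcond=None)
    r = y - A[:, S] @ x_S
    for _ in range(max_iter):
        # Expand the support with K new candidates taken from the residual correlations.
        cand = np.argsort(np.abs(A.T @ r))[-K:]
        T = np.union1d(S, cand)
        b, *_ = np.linalg.lstsq(A[:, T], y, rcond=None)
        # Prune back to the K largest coefficients.
        S_new = T[np.argsort(np.abs(b))[-K:]]
        x_S, *_ = np.linalg.lstsq(A[:, S_new], y, rcond=None)
        r_new = y - A[:, S_new] @ x_S
        if np.linalg.norm(r_new) >= np.linalg.norm(r):
            break
        S, r = S_new, r_new
    x = np.zeros(n)
    x[S], *_ = np.linalg.lstsq(A[:, S], y, rcond=None)
    return x
```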
Sharp thresholds for high-dimensional and noisy recovery of sparsity The problem of consistently estimating the sparsity pattern of a vector β* ∈ R^p based on observations contaminated by noise arises in various contexts, including subset selection in regression, structure estimation in graphical models, sparse approximation, and signal denoising. Unfortunately, the natural optimization-theoretic formulation involves ℓ0 constraints, which leads to NP-hard problems in general; this intractability motivates the use of relaxations based on ℓ1 constraints. We analyze the behavior of ℓ1-constrained quadratic programming (QP), also referred to as the Lasso, for recovering the sparsity pattern. Our main result is to establish a sharp relation between the problem dimension p, the number s of non-zero elements in β*, and the number of observations n that are required for reliable recovery. For a broad class of Gaussian ensembles satisfying mutual incoherence conditions, we establish existence and compute explicit values of thresholds θℓ and θu with the following properties: for any ν > 0, if n > 2s(θu + ν) log(p − s) + s + 1, then the Lasso succeeds in recovering the sparsity pattern with probability converging to one for large problems, whereas for n < 2s(θℓ − ν) log(p − s) + s + 1, the probability of successful recovery converges to zero. For the special case of the uniform Gaussian ensemble, we show that θℓ = θu = 1, so that the threshold is sharp and exactly determined.
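A small empirical check of the threshold scaling stated above for the uniform Gaussian ensemble, using scikit-learn's Lasso. The regularization weight is a heuristic of mine rather than the paper's choice, so exact support recovery in this toy run is illustrative, not guaranteed.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(2)
p, s = 512, 8
n = int(2 * 1.2 * s * np.log(p - s)) + s + 1             # slightly above the predicted threshold
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[rng.choice(p, size=s, replace=False)] = 1.0
y = X @ beta + 0.5 * rng.standard_normal(n)

# Heuristic regularization weight; the theory concerns the scaling of n, not this constant.
fit = Lasso(alpha=0.2 * np.sqrt(np.log(p) / n), max_iter=10000).fit(X, y)
print(n, set(np.flatnonzero(fit.coef_)) == set(np.flatnonzero(beta)))
```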
On sparse representations in arbitrary redundant bases The purpose of this contribution is to generalize some recent results on sparse representations of signals in redundant bases. The question that is considered is the following: given a matrix A of dimension (n,m) with m > n and a vector b=Ax, find a sufficient condition for b to have a unique sparsest representation x as a linear combination of columns of A. Answers to this question are known when A is the concatenation of two unitary matrices and either an extensive combinatorial search is performed or a linear program is solved. We consider arbitrary A matrices and give a sufficient condition for the unique sparsest solution to be the unique solution to both a linear program or a parametrized quadratic program. The proof is elementary and the possibility of using a quadratic program opens perspectives to the case where b=Ax+e with e a vector of noise or modeling errors.
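Sufficient conditions of this kind can be checked directly from the dictionary. The sketch below computes the mutual coherence of an arbitrary matrix A and the classical (1 + 1/μ)/2 sparsity bound, which is one concrete coherence-based uniqueness guarantee; it is an illustration of the flavor of condition discussed above, not the paper's exact statement.

```python
import numpy as np

def mutual_coherence(A):
    """Largest absolute inner product between distinct, normalized columns of A."""
    G = A / np.linalg.norm(A, axis=0)
    gram = np.abs(G.T @ G)
    np.fill_diagonal(gram, 0.0)
    return gram.max()

rng = np.random.default_rng(3)
A = rng.standard_normal((64, 128))
mu = mutual_coherence(A)
# A representation b = A @ x is the unique sparsest one (and recoverable by linear
# programming) whenever the number of nonzeros in x is below this bound:
print(0.5 * (1.0 + 1.0 / mu))
```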
Multiple description coding: compression meets the network This article focuses on the compressed representations of pictures. The representation does not affect how many bits get from the Web server to the laptop, but it determines the usefulness of the bits that arrive. Many different representations are possible, and there is more involved in their choice than merely selecting a compression ratio. The techniques presented represent a single information...
Near-Optimal Sparse Recovery in the L1 Norm We consider the *approximate sparse recovery problem*, where the goal is to (approximately) recover a high-dimensional vector x ∈ R^n from its lower-dimensional *sketch* Ax ∈ R^m. Specifically, we focus on the sparse recovery problem in the L1 norm: for a parameter k, given the sketch Ax, compute an approximation x' of x such that the L1 approximation error ||x − x'|| is close to the minimum of ||x − x*|| over all vectors x* with at most k terms. The sparse recovery problem has been subject to extensive research over the last few years. Many solutions to this problem have been discovered, achieving different trade-offs between various attributes, such as the sketch length, encoding and recovery times. In this paper we provide a sparse recovery scheme which achieves close to optimal performance on virtually all attributes. In particular, this is the first recovery scheme that guarantees k log(n/k) sketch length, and near-linear n log(n/k) recovery time *simultaneously*. It also features low encoding and update times, and is noise-resilient.
Hybrid Gauss-Trapezoidal Quadrature Rules A new class of quadrature rules for the integration of both regular and singular functions is constructed and analyzed. For each rule the quadrature weights are positive and the class includes rules of arbitrarily high-order convergence. The quadratures result from alterations to the trapezoidal rule, in which a small number of nodes and weights at the ends of the integration interval are replaced. The new nodes and weights are determined so that the asymptotic expansion of the resulting rule, provided by a generalization of the Euler--Maclaurin summation formula, has a prescribed number of vanishing terms. The superior performance of the rules is demonstrated with numerical examples and application to several problems is discussed.
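The rules above build on the Euler-Maclaurin expansion of the trapezoidal rule. The following sketch shows the simplest instance of that idea, a composite trapezoidal rule with a single endpoint-derivative correction; the hybrid Gauss-trapezoidal rules of the paper go further by replacing end nodes and weights, which this sketch does not attempt.

```python
import numpy as np

def corrected_trapezoid(f, df, a, b, n):
    """Composite trapezoidal rule plus the first Euler-Maclaurin endpoint correction."""
    x = np.linspace(a, b, n + 1)
    h = (b - a) / n
    T = h * (0.5 * f(x[0]) + f(x[1:-1]).sum() + 0.5 * f(x[-1]))
    return T - h ** 2 / 12.0 * (df(b) - df(a))   # raises the order from O(h^2) to O(h^4)

# Example: integrate exp on [0, 1]; the exact value is e - 1.
print(corrected_trapezoid(np.exp, np.exp, 0.0, 1.0, 32), np.e - 1.0)
```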
Combining Statistical Learning with a Knowledge-Based Approach - A Case Study in Intensive Care Monitoring The paper describes a case study in combining different methods for acquiring medical knowledge. Given a huge amount of noisy, high-dimensional numerical time series data describing patients in intensive care, the support vector machine is used to learn when and how to change the dose of which drug. Given medical knowledge about and expertise in clinical decision making, a first-order logic knowledge base about effects of therapeutical interventions has been built. As a preprocessing ...
Reduction about approximation spaces of covering generalized rough sets The introduction of covering generalized rough sets has made a substantial contribution to the traditional theory of rough sets. The notion of attribute reduction can be regarded as one of the strongest and most significant results in rough sets. However, the efforts made on attribute reduction of covering generalized rough sets are far from sufficient. In this work, covering reduction is examined and discussed. We initially construct a new reduction theory by redefining the approximation spaces and the reducts of covering generalized rough sets. This theory is applicable to all types of covering generalized rough sets, and generalizes some existing reduction theories. Moreover, the currently insufficient reducts of covering generalized rough sets are improved by the new reduction. We then investigate in detail the procedures to get reducts of a covering. The reduction of a covering also provides a technique for data reduction in data mining.
A Particle-Partition of Unity Method--Part III: A Multilevel Solver In this sequel to part I [SIAM J. Sci. Comput., 22 (2000), pp. 853--890] and part II [SIAM J. Sci. Comput., 23 (2002), pp. 1655--1682] we focus on the efficient solution of the linear block-systems arising from a Galerkin discretization of an elliptic partial differential equation of second order with the partition of unity method (PUM). We present a cheap multilevel solver for partition of unity (PU) discretizations of any order. The shape functions of a PUM are products of piecewise rational PU functions $\varphi_i$ with $\supp(\varphi_i)=\omega_i$ and higher order local approximation functions $\psi_i^n$ (usually a local polynomial of degree $\leq p_i$). Furthermore, they are noninterpolatory. In a multilevel approach we have to cope with not only noninterpolatory basis functions but also with a sequence of nonnested spaces due to the meshfree construction. Hence, injection or interpolatory interlevel transfer operators are not available for our multilevel PUM. Therefore, the remaining natural choice for the prolongation operators are L2-projections. Here, we exploit the PUM construction of the function spaces and a hierarchical construction of the PU itself to localize the corresponding projection problem. This significantly reduces the computational costs associated with the setup and the application of the interlevel transfer operators. The second main ingredient of our multilevel solver is the use of a block-smoother to treat the local approximation functions $\psi_i^n$ for all $n$ simultaneously. The results of our numerical experiments in two and three dimensions show that the convergence rate of the proposed multilevel solver is independent of the number of patches $\card(\{\omega_i\})$. The convergence rate is slightly dependent on the local approximation orders pi.
Split Bregman iterative algorithm for sparse reconstruction of electrical impedance tomography In this paper, we present an evaluation of the use of the split Bregman iterative algorithm for the L1-norm regularized inverse problem of electrical impedance tomography. Simulations are performed to validate that our algorithm is competitive in terms of imaging quality and computational speed in comparison with several state-of-the-art algorithms. Results also indicate that, in contrast to the conventional L2-norm regularization method and the total variation (TV) regularization method, the L1-norm regularization method can sharpen the edges and is more robust against data noise.
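A generic split Bregman sketch for the L1-regularized least-squares problem min_x λ||x||_1 + (1/2)||Ax − y||². The EIT paper applies the same kind of iteration to its (much larger) linearized forward operator, so A, y and the parameters here are stand-ins.

```python
import numpy as np

def shrink(v, t):
    """Soft-thresholding (the proximal step of the L1 term)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def split_bregman_l1(A, y, lam=1e-2, mu=1.0, n_iter=200):
    """Split Bregman iterations for  min_x  lam*||x||_1 + 0.5*||A x - y||^2."""
    m, n = A.shape
    x = np.zeros(n)
    d = np.zeros(n)          # splitting variable enforcing d = x
    b = np.zeros(n)          # Bregman variable
    M = A.T @ A + mu * np.eye(n)
    Aty = A.T @ y
    for _ in range(n_iter):
        x = np.linalg.solve(M, Aty + mu * (d - b))   # quadratic subproblem
        d = shrink(x + b, lam / mu)                  # L1 subproblem
        b = b + x - d                                # Bregman update
    return x
```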
1.105913
0.055275
0.010051
0.005779
0.001563
0.000301
0.000063
0.000013
0.000002
0
0
0
0
0
The problem of linguistic approximation in clinical decision making This paper deals with the problem of linguistic approximation in a computerized system in the context of medical decision making. The general problem and a few application-oriented solutions have been treated in the literature. After a review of the main approaches (best fit, successive approximations, piecewise decomposition, preference set, fuzzy chopping) some of the unresolved problems are pointed out. The case of deciding upon various diagnostic abnormalities suggested by the analysis of the electrocardiographic signal is then put forward. The linguistic approximation method used in this situation is finally described. Its main merit is its simple (i.e., easily understood) linguistic output, which uses labels whose meaning is rather well established among the users (i.e., the physicians).
An ordinal approach to computing with words and the preference-aversion model Computing with words (CWW) explores the brain's ability to handle and evaluate perceptions through language, i.e., by means of the linguistic representation of information and knowledge. On the other hand, standard preference structures examine decision problems through the decomposition of the preference predicate into the simpler situations of strict preference, indifference and incomparability. Hence, following the distinctive cognitive/neurological features for perceiving positive and negative stimuli in separate regions of the brain, we consider two separate and opposite poles of preference and aversion, and obtain an extended preference structure named the Preference-aversion (P-A) structure. In this way, examining the meaning of words under an ordinal scale and using CWW's methodology, we are able to formulate the P-A model under a simple and purely linguistic approach to decision making, obtaining a solution based on the preference and non-aversion order.
Generalised Interval-Valued Fuzzy Soft Set. We introduce the concept of generalised interval-valued fuzzy soft set and its operations and study some of their properties. We give applications of this theory in solving a decision making problem. We also introduce a similarity measure of two generalised interval-valued fuzzy soft sets and discuss its application in a medical diagnosis problem. Keywords: fuzzy set; soft set; fuzzy soft set; generalised fuzzy soft set; generalised interval-valued fuzzy soft set; interval-valued fuzzy set; interval-valued fuzzy soft set.
A collective decision model involving vague concepts and linguistic expressions. In linguistic collective decision, the main objective is to select the best alternatives using linguistic evaluations provided by multiple experts. This paper presents a collective decision model, which is able to deal with complex linguistic evaluations. In this decision model, the linguistic evaluations are represented by linguistic expressions which are the logic formulas obtained by applying logic connectives to the set of basic linguistic labels. The vagueness of each linguistic expression is implicitly captured by a semantic similarity relation rather than a fuzzy set, since each linguistic expression determines a semantic similarity distribution on the set of basic linguistic labels. The basic idea of this collective decision model is to convert the semantic similarity distributions determined by linguistic expressions into probability distributions of the corresponding linguistic expressions. The main advantage of this proposed model is its capability to deal with complex linguistic evaluations and partial semantic overlapping among neighboring linguistic labels.
Solving an assignment–selection problem with verbal information and using genetic algorithms The assignment–selection problems deal with finding the best one-to-one match for each of the given number of “candidates” to “positions”. Different benefits or costs are involved in each match and the goal is to minimise the total expense. In this paper we propose the use of verbal information for representing the vague knowledge available. Doing it, natural linguistic labels allow the problem to be recognised as it is in real life. This paper is an attempt to supply a satisfactory solution to real assignment–selection problems with verbal information and using genetic algorithms, showing the application of this model to the staff selection problem.
Team Situation Awareness Using Web-Based Fuzzy Group Decision Support Systems Situation awareness (SA) is an important element to support responses and decision making to crisis problems. Decision making for a complex situation often needs a team to work cooperatively to get consensus awareness for the situation. Team SA is characterized including information sharing, opinion integration and consensus SA generation. In the meantime, various uncertainties are involved in team SA during information collection and awareness generation. Also, the collaboration between team members may be across distances and need web-based technology to facilitate. This paper presents a web-based fuzzy group decision support system (WFGDSS) and demonstrates how this system can provide a means of support for generating team SA in a distributed team work context with the ability of handling uncertain information.
Appropriateness measures: an uncertainty model for vague concepts We argue that in the decision making process required for selecting assertible vague descriptions of an object, it is practical that communicating agents adopt an epistemic stance. This corresponds to the assumption that there exists a set of conventions governing the appropriate use of labels, and about which an agent has only partial knowledge and hence significant uncertainty. It is then proposed that this uncertainty is quantified by a measure corresponding to an agent’s subjective belief that a vague concept label can be appropriately used to describe a particular object. We then apply Bayesian networks to investigate, in the case when knowledge of labelling conventions is represented by an ordering or ranking of the labels according to their appropriateness, how measure values allocated to basic labels can be used to directly infer the appropriateness measure of compound expressions.
A model based on linguistic 2-tuples for dealing with multigranular hierarchical linguistic contexts in multi-expert decision-making In those problems that deal with multiple sources of linguistic information we can find problems defined in contexts where the linguistic assessments are assessed in linguistic term sets with different granularity of uncertainty and/or semantics (multigranular linguistic contexts). Different approaches have been developed to manage this type of context, which unify the multigranular linguistic information in a unique linguistic term set for easy management of the information. This normalization process can produce a loss of information and hence a lack of precision in the final results. In this paper, we shall present a type of multigranular linguistic contexts we shall call linguistic hierarchies term sets, such that, when we deal with multigranular linguistic information assessed in these structures we can unify the information assessed in them without loss of information. To do so, we shall use the 2-tuple linguistic representation model. Afterwards we shall develop a linguistic decision model dealing with multigranular linguistic contexts and apply it to a multi-expert decision-making problem.
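A minimal sketch of the 2-tuple linguistic representation the model above relies on: a numerical value β in [0, g] is written as the closest label plus a symbolic translation α in [−0.5, 0.5). The label set below is hypothetical.

```python
def to_2tuple(beta, labels):
    """Delta(beta): closest linguistic label plus symbolic translation alpha in [-0.5, 0.5)."""
    i = int(round(beta))
    return labels[i], beta - i

def from_2tuple(label, alpha, labels):
    """Inverse of Delta: back to a numerical value in [0, len(labels) - 1]."""
    return labels.index(label) + alpha

labels = ["none", "very_low", "low", "medium", "high", "very_high", "perfect"]
t = to_2tuple(3.4, labels)          # roughly ('medium', 0.4): 'medium' shifted towards 'high'
print(t, from_2tuple(*t, labels))
```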
Facility location selection using fuzzy topsis under group decisions This work presents a fuzzy TOPSIS model under group decisions for solving the facility location selection problem, where the ratings of various alternative locations under different subjective attributes and the importance weights of all attributes are assessed in linguistic values represented by fuzzy numbers. The objective attributes are transformed into dimensionless indices to ensure compatibility with the linguistic ratings of the subjective attributes. Furthermore, the membership function of the aggregation of the ratings and weights for each alternative location versus each attribute can be developed by interval arithmetic and α-cuts of fuzzy numbers. The ranking method of the mean of the integral values is applied to help derive the ideal and negative-ideal fuzzy solutions to complete the proposed fuzzy TOPSIS model. Finally, a numerical example demonstrates the computational process of the proposed model.
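The crisp skeleton of the TOPSIS ranking used above; the paper's contribution is to carry fuzzy linguistic ratings and group aggregation through these same steps, which this sketch omits. The example decision matrix, weights and attribute types are made up.

```python
import numpy as np

def topsis(scores, weights, benefit):
    """Crisp TOPSIS: rank alternatives by closeness to the ideal solution."""
    R = scores / np.linalg.norm(scores, axis=0)               # vector-normalize each attribute
    V = R * weights                                           # weighted normalized matrix
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))   # positive ideal solution
    anti  = np.where(benefit, V.min(axis=0), V.max(axis=0))   # negative ideal solution
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    return d_neg / (d_pos + d_neg)                            # closeness coefficient, higher is better

scores = np.array([[7.0, 0.3], [9.0, 0.5], [6.0, 0.2]])       # hypothetical locations x attributes
weights = np.array([0.6, 0.4])
print(topsis(scores, weights, benefit=np.array([True, False])))
```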
Membership maximization prioritization methods for fuzzy analytic hierarchy process Fuzzy analytic hierarchy process (FAHP) has increasingly been applied in many areas. The extent analysis method is a popular tool for prioritization in FAHP, although significant technical errors in it are identified in this study. To address these errors, this research proposes membership maximization prioritization methods (MMPMs) using different membership functions as novel solutions. Given the lack of research on effectiveness measurement for crisp/fuzzy prioritization methods, this study also proposes a membership fitness index to evaluate the effectiveness of prioritization methods. Comparisons with other popular fuzzy/crisp prioritization methods, including modified fuzzy preference programming, direct least squares, and the eigenvalue method, are conducted, and the analyses indicate that MMPMs lead to much more reliable results in terms of the membership fitness index. A numerical example demonstrates the usability of MMPMs for FAHP, and thus MMPMs can effectively be applied to various decision analysis applications.
An Approach To Interval-Valued R-Implications And Automorphisms The aim of this work is to introduce an approach for interval-valued R-implications, which satisfy some analogous properties of R-implications. We show that the best interval representation of an R-implication that is obtained from a left continuous t-norm coincides with the interval-valued R-implication obtained from the best interval representation of such t-norm, whenever this is an inclusion monotonic interval function. This provides, under this condition, a nice characterization for the best interval representation of an R-implication, which is also an interval-valued R-implication. We also introduce interval-valued automorphisms as the best interval representations of automorphisms. It is shown that interval automorphisms act on interval R-implications, generating other interval R-implications.
Approximate Sparse Recovery: Optimizing Time and Measurements A Euclidean approximate sparse recovery system consists of parameters $k,N$, an $m$-by-$N$ measurement matrix, $\bm{\Phi}$, and a decoding algorithm, $\mathcal{D}$. Given a vector, ${\mathbf x}$, the system approximates ${\mathbf x}$ by $\widehat {\mathbf x}=\mathcal{D}(\bm{\Phi} {\mathbf x})$, which must satisfy $|\widehat {\mathbf x} - {\mathbf x}|_2\le C |{\mathbf x} - {\mathbf x}_k|_2$, where ${\mathbf x}_k$ denotes the optimal $k$-term approximation to ${\mathbf x}$. (The output $\widehat{\mathbf x}$ may have more than $k$ terms.) For each vector ${\mathbf x}$, the system must succeed with probability at least 3/4. Among the goals in designing such systems are minimizing the number $m$ of measurements and the runtime of the decoding algorithm, $\mathcal{D}$. In this paper, we give a system with $m=O(k \log(N/k))$ measurements—matching a lower bound, up to a constant factor—and decoding time $k\log^{O(1)} N$, matching a lower bound up to a polylog$(N)$ factor. We also consider the encode time (i.e., the time to multiply $\bm{\Phi}$ by $x$), the time to update measurements (i.e., the time to multiply $\bm{\Phi}$ by a 1-sparse $x$), and the robustness and stability of the algorithm (resilience to noise before and after the measurements). Our encode and update times are optimal up to $\log(k)$ factors. The columns of $\bm{\Phi}$ have at most $O(\log^2(k)\log(N/k))$ nonzeros, each of which can be found in constant time. Our full result, a fully polynomial randomized approximation scheme, is as follows. If ${\mathbf x}={\mathbf x}_k+\nu_1$, where $\nu_1$ and $\nu_2$ (below) are arbitrary vectors (regarded as noise), then setting $\widehat {\mathbf x} = \mathcal{D}(\Phi {\mathbf x} + \nu_2)$, and for properly normalized $\bm{\Phi}$, we get $\left|{\mathbf x} - \widehat {\mathbf x}\right|_2^2 \le (1+\epsilon)\left|\nu_1\right|_2^2 + \epsilon\left|\nu_2\right|_2^2$ using $O((k/\epsilon)\log(N/k))$ measurements and $(k/\epsilon)\log^{O(1)}(N)$ time for decoding.
Qualitative spatial reasoning: a semi-quantitative approach using fuzzy logic Qualitative reasoning is useful as it facilitates reasoning with incomplete and weak information and aids the subsequent application of more detailed quantitative theories. Adoption of qualitative techniques for spatial reasoning can be very useful in situations where it is difficult to obtain precise information and where there are real constraints of memory, time and hostile threats. This paper formulates a computational model for obtaining all induced spatial constraints on a set of landmarks, given a set of approximate quantitative and qualitative constraints on them, which may be incomplete, and perhaps even conflicting.
Analysis of frame-compatible subsampling structures for efficient 3DTV broadcast The evolution of the television market is led by 3DTV technology, and this tendency can accelerate during the next years according to expert forecasts. However, 3DTV delivery by broadcast networks is not currently developed enough, and acts as a bottleneck for the complete deployment of the technology. Thus, increasing interest is dedicated to stereo 3DTV formats compatible with current HDTV video equipment and infrastructure, as they may greatly encourage 3D acceptance. In this paper, different subsampling schemes for HDTV compatible transmission of both progressive and interlaced stereo 3DTV are studied and compared. The frequency characteristics and preserved frequency content of each scheme are analyzed, and a simple interpolation filter is specially designed. Finally, the advantages and disadvantages of the different schemes and filters are evaluated through quality testing on several progressive and interlaced video sequences.
1.005083
0.009604
0.009604
0.007017
0.004584
0.002839
0.002042
0.00118
0.000214
0.000035
0.000003
0
0
0
Combining Statistical Learning with a Knowledge-Based Approach - A Case Study in Intensive Care Monitoring The paper describes a case study in combining different methods for acquiring medical knowledge. Given a huge amount of noisy, high-dimensional numerical time series data describing patients in intensive care, the support vector machine is used to learn when and how to change the dose of which drug. Given medical knowledge about and expertise in clinical decision making, a first-order logic knowledge base about effects of therapeutical interventions has been built. As a preprocessing ...
Estimation of delay variations due to random-dopant fluctuations in nano-scaled CMOS circuits In nanoscale CMOS circuits the random dopant fluctuations (RDF) cause significant threshold voltage (Vt) variations in transistors. In this paper, we propose a semi-analytical estimation methodology to predict the delay distribution [Mean and Standard Deviation (STD)] of logic circuits considering Vt variation in transistors. The proposed method is fast and can be used to predict delay distributio...
Statistical design and optimization of SRAM cell for yield enhancement We have analyzed and modeled the failure probabilities of SRAM cells due to process parameter variations. A method to predict the yield of a memory chip based on the cell failure probability is proposed. The developed method is used in an early stage of a design cycle to minimize memory failure probability by statistically sizing of SRAM cell.
The impact of intrinsic device fluctuations on CMOS SRAM cell stability Reductions in CMOS SRAM cell static noise margin (SNM) due to intrinsic threshold voltage fluctuations in uniformly doped minimum-geometry cell MOSFETs are investigated for the first time using compact physical and stochastic models. Six sigma deviations in SNM due to intrinsic fluctuations alone are projected to exceed the nominal SNM for sub-100-nm CMOS technology generations. These large deviations pose severe barriers to scaling of supply voltage, channel length, and transistor count for conventional 6T SRAM-dominated CMOS ASICs and microprocessors
Statistical blockade: a novel method for very fast Monte Carlo simulation of rare circuit events, and its application Circuit reliability under statistical process variation is an area of growing concern. For highly replicated circuits such as SRAMs and flip flops, a rare statistical event for one circuit may induce a not-so-rare system failure. Existing techniques perform poorly when tasked to generate both efficient sampling and sound statistics for these rare events. Statistical Blockade is a novel Monte Carlo technique that allows us to efficiently filter---to block---unwanted samples insufficiently rare in the tail distributions we seek. The method synthesizes ideas from data mining and Extreme Value Theory, and shows speed-ups of 10X-100X over standard Monte Carlo.
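A toy sketch of the statistical blockade flow described above: learn a cheap classifier with a relaxed tail threshold, use it to block non-tail Monte Carlo samples, simulate only the survivors, and fit a generalized Pareto distribution to the exceedances. The "circuit" here is a made-up linear surrogate, not a SPICE model, and all thresholds are illustrative.

```python
import numpy as np
from scipy.stats import genpareto
from sklearn.svm import LinearSVC

rng = np.random.default_rng(4)

def simulate(params):
    """Stand-in for an expensive circuit simulation of one performance metric."""
    return params @ np.array([1.0, 0.6, 0.3]) + 0.1 * rng.standard_normal(len(params))

# 1) Small training set: simulate everything and learn where the tail region lies.
P_train = rng.standard_normal((2000, 3))
y_train = simulate(P_train)
tail_thr = np.quantile(y_train, 0.97)
relaxed_thr = np.quantile(y_train, 0.90)       # relaxed threshold guards against misclassification
clf = LinearSVC(max_iter=20000).fit(P_train, y_train > relaxed_thr)

# 2) Large candidate set: only the points the classifier lets through get simulated.
P_cand = rng.standard_normal((200_000, 3))
survivors = P_cand[clf.predict(P_cand).astype(bool)]
y_tail = simulate(survivors)

# 3) Extreme value theory: fit a generalized Pareto distribution to the exceedances.
exceedances = y_tail[y_tail > tail_thr] - tail_thr
shape, loc, scale = genpareto.fit(exceedances, floc=0.0)
print(len(survivors), shape, scale)
```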
Adaptive-Learning-Based Importance Sampling for Analog Circuit DPPM Estimation This paper addresses the important problem of defect level estimation. For more than 30 years, there have been published models which are commonly used to estimate the time zero test escape rate of digital logic designs. However, estimating escape rate for analog circuits is much more challenging. This paper applies importance sampling techniques to this problem to arrive at a much more practical method of analog defect level computation.
SPARE: a scalable algorithm for passive, structure preserving, parameter-aware model order reduction This paper describes a flexible and efficient new algorithm for model order reduction of parameterized systems. The method is based on the reformulation of the parameterized system as a perturbation-like parallel interconnection of the nominal transfer function and the nonparameterized transfer function sensitivities with respect to the parameter variations. Such a formulation reveals an explicit dependence on each parameter which is exploited by reducing each component system independently via a standard nonparameterized structure preserving algorithm. Therefore, the resulting smaller size interconnected system retains the structure of the original system with respect to parameter dependence. This allows for better accuracy control, enabling independent adaptive order determination with respect to each parameter and adding flexibility in simulation environments. It is shown that the method is efficiently scalable and preserves relevant system properties such as passivity. The new technique can handle fairly large parameter variations on systems whose outputs exhibit smooth dependence on the parameters, also allowing design space exploration to some degree. Several examples show that besides the added flexibility and control, when compared with competing algorithms, the proposed technique can, in some cases, produce smaller reduced models with potential accuracy gains.
Fast Variational Analysis of On-Chip Power Grids by Stochastic Extended Krylov Subspace Method This paper proposes a novel stochastic method for analyzing the voltage drop variations of on-chip power grid networks, considering lognormal leakage current variations. The new method, called StoEKS, applies Hermite polynomial chaos to represent the random variables in both power grid networks and input leakage currents. However, different from the existing orthogonal polynomial-based stochastic simulation method, the extended Krylov subspace (EKS) method is employed to compute variational responses from the augmented matrices consisting of the coefficients of Hermite polynomials. Our contribution lies in the acceleration of the spectral stochastic method using the EKS method to rapidly solve the variational circuit equations for the first time. By using the reduction technique, the new method partially mitigates the increased circuit-size problem associated with the augmented matrices from the Galerkin-based spectral stochastic method. Experimental results show that the proposed method is about two orders of magnitude faster than the existing Hermite PC-based simulation method and many orders of magnitude faster than Monte Carlo methods with marginal errors. StoEKS is scalable for analyzing much larger circuits than the existing Hermite PC-based methods.
Variational capacitance modeling using orthogonal polynomial method In this paper, we propose a novel statistical capacitance extraction method for interconnects considering process variations. The new method, called statCap, is based on the spectral stochastic method where orthogonal polynomials are used to represent the statistical processes in a deterministic way. We first show how the variational potential coefficient matrix is represented in a first-order form using Taylor expansion and orthogonal decomposition. Then an augmented potential coefficient matrix, which consists of the coefficients of the polynomials, is derived. After that, corresponding augmented system is solved to obtain the variational capacitance values in the orthogonal polynomial form. Experimental results show that our method is two orders of magnitude faster than the recently proposed statistical capacitance extraction method based on the spectral stochastic collocation approach and many orders of magnitude faster than the Monte Carlo method for several practical interconnect structures.
Predicting Circuit Performance Using Circuit-level Statistical Timing Analysis
An Architecture for Compressive Imaging Compressive sensing is an emerging field based on the revelation that a small group of non-adaptive linear projections of a compressible signal contains enough information for reconstruction and processing. In this paper, we propose algorithms and hardware to support a new theory of compressive imaging. Our approach is based on a new digital image/video camera that directly acquires random projections of the signal without first collecting the pixels/voxels. Our camera architecture employs a digital micromirror array to perform optical calculations of linear projections of an image onto pseudorandom binary patterns. Its hallmarks include the ability to obtain an image with a single detection element while measuring the image/video fewer times than the number of pixels; this can significantly reduce the computation required for video acquisition/encoding. Because our system relies on a single photon detector, it can also be adapted to image at wavelengths that are currently impossible with conventional CCD and CMOS imagers. We are currently testing a prototype design for the camera and include experimental results.
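The measurement stage of the single-detector camera described above reduces to inner products of the scene with pseudorandom binary mirror patterns. This sketch shows only that stage, with made-up sizes, and leaves reconstruction to any compressive-sensing solver.

```python
import numpy as np

rng = np.random.default_rng(5)
n_pixels = 64 * 64
m = n_pixels // 4                                   # measure 4x fewer times than the pixel count
scene = rng.random(n_pixels)                        # stand-in for the (vectorized) image
patterns = rng.integers(0, 2, size=(m, n_pixels)).astype(float)   # micromirror on/off patterns
measurements = patterns @ scene                     # each entry: one photodetector reading
# Reconstruction would feed (patterns, measurements) to a sparse-recovery algorithm.
```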
Mixed-signal parallel compressed sensing and reception for cognitive radio A parallel structure to do spectrum sensing in cognitive radio (CR) at sub-Nyquist rate is proposed. The structure is based on compressed sensing (CS) that exploits the sparsity of frequency utilization. Specifically, the received analog signal is segmented or time-windowed and CS is applied to each segment independently using an analog implementation of the inner product, then all the samples are processed together to reconstruct the signal. Applying the CS framework to the analog signal directly relaxes the requirements in wideband RF receiver front-ends. Moreover, the parallel structure provides a design flexibility and scalability on the sensing rate and system complexity. This paper also provides a joint reconstruction algorithm that optimally detects the information symbols from the sub-Nyquist analog projection coefficients. Simulations showing the efficiency of the proposed approach are also presented.
Expression-Insensitive 3D Face Recognition Using Sparse Representation We present a face recognition method based on sparse representation for recognizing 3D face meshes under expressions using low-level geometric features. First, to enable the application of the sparse representation framework, we develop a uniform remeshing scheme to establish a consistent sampling pattern across 3D faces. To handle facial expressions, we design a feature pooling and ranking scheme to collect various types of low-level geometric features and rank them according to their sensitivities to facial expressions. By simply applying the sparse representation framework to the collected low-level features, our proposed method already achieves satisfactory recognition rates, which demonstrates the efficacy of the framework for 3D face recognition. To further improve results in the presence of severe facial expressions, we show that by choosing higher-ranked, i.e., expression-insensitive, features, the recognition rates approach those for neutral faces, without requiring an extensive set of reference faces for each individual to cover possible variations caused by expressions as proposed in previous work. We apply our face recognition method to the GavabDB and FRGC 2.0 databases and demonstrate encouraging results.
Some general comments on fuzzy sets of type-2 This paper contains some general comments on the algebra of truth values of fuzzy sets of type 2. It details the precise mathematical relationship with the algebras of truth values of ordinary fuzzy sets and of interval-valued fuzzy sets. Subalgebras of the algebra of truth values and t-norms on them are discussed. There is some discussion of finite type-2 fuzzy sets.
1.100951
0.043339
0.020381
0.010777
0.005544
0.00026
0.000144
0.000082
0.000047
0.000017
0
0
0
0
Type-2 operations on finite chains The algebra of truth values for fuzzy sets of type-2 consists of all mappings from the unit interval into itself, with operations certain convolutions of these mappings with respect to pointwise max and min. This algebra generalizes the truth-value algebras of both type-1 and of interval-valued fuzzy sets, and has been studied rather extensively both from a theoretical and applied point of view. This paper addresses the situation when the unit interval is replaced by two finite chains. Most of the basic theory goes through, but there are several special circumstances of interest. These algebras are of interest on two counts, both as special cases of bases for fuzzy theories, and as mathematical entities per se.
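On a finite chain the convolution operations mentioned above become finite max-min computations. This is a direct sketch of the meet and join of two type-2 truth values represented as arrays over the chain {0, ..., k-1}; the example values are arbitrary.

```python
import numpy as np

def t2_meet(f, g):
    """Convolution meet of two type-2 truth values on a finite chain."""
    k = len(f)
    out = np.zeros(k)
    for x in range(k):
        for y in range(k):
            z = min(x, y)
            out[z] = max(out[z], min(f[x], g[y]))
    return out

def t2_join(f, g):
    """Dual convolution join, combining arguments with pointwise max."""
    k = len(f)
    out = np.zeros(k)
    for x in range(k):
        for y in range(k):
            z = max(x, y)
            out[z] = max(out[z], min(f[x], g[y]))
    return out

f = np.array([0.2, 1.0, 0.4])
g = np.array([0.0, 0.6, 1.0])
print(t2_meet(f, g), t2_join(f, g))
```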
Negations on type-2 fuzzy sets. So far, the negation that usually has been considered within the type-2 fuzzy sets (T2FSs) framework, and hence T2FS truth values M (set of all functions from [0,1] to [0,1]), was obtained by means of Zadeh's extension principle and calculated from standard negation in [0,1]. But there has been no comparative analysis of the properties that hold for the above operation and the axioms that any negation in M should satisfy. This suggests that negations should be studied more thoroughly in this context. Following on from this, we introduce in this paper the axioms that an operation in M must satisfy to qualify as a negation and then prove that the usual negation on T2FSs, in particular, is antimonotonic in L (set of normal and convex functions of M) but not in M. We propose a family of operations calculated from any suprajective negation in [0,1] and prove that they are negations in L. Finally, we examine De Morgan's laws for some operations with respect to these negations.
Categories with fuzzy sets and relations. We define a 2-category whose objects are fuzzy sets and whose maps are relations subject to certain natural conditions. We enrich this category with additional monoidal and involutive structure coming from t-norms and negations on the unit interval. We develop the basic properties of this category and consider its relation to other familiar categories. A discussion is made of extending these results to the setting of type-2 fuzzy sets.
Convex normal functions revisited The lattice L_u of upper semicontinuous convex normal functions with convolution ordering arises in studies of type-2 fuzzy sets. In 2002, Kawaguchi and Miyakoshi [Extended t-norms as logical connectives of fuzzy truth values, Multiple-Valued Logic 8(1) (2002) 53-69] showed that this lattice is a complete Heyting algebra. Later, Harding et al. [Lattices of convex, normal functions, Fuzzy Sets and Systems 159 (2008) 1061-1071] gave an improved description of this lattice and showed it was a continuous lattice in the sense of Gierz et al. [A Compendium of Continuous Lattices, Springer, Berlin, 1980]. In this note we show the lattice L_u is isomorphic to the lattice of decreasing functions from the real unit interval [0,1] to the interval [0,2] under pointwise ordering, modulo equivalence almost everywhere. This allows development of further properties of L_u. It is shown that L_u is completely distributive, is a compact Hausdorff topological lattice whose topology is induced by a metric, and is self-dual via a period two antiautomorphism. We also show the lattice L_u has another realization of natural interest in studies of type-2 fuzzy sets. It is isomorphic to a quotient of the lattice L of all convex normal functions under the convolution ordering. This quotient identifies two convex normal functions if they agree almost everywhere and their intervals of increase and decrease agree almost everywhere.
A comparative study of ranking methods, similarity measures and uncertainty measures for interval type-2 fuzzy sets Ranking methods, similarity measures and uncertainty measures are very important concepts for interval type-2 fuzzy sets (IT2 FSs). So far, there is only one ranking method for such sets, whereas there are many similarity and uncertainty measures. A new ranking method and a new similarity measure for IT2 FSs are proposed in this paper. All these ranking methods, similarity measures and uncertainty measures are compared based on real survey data and then the most suitable ranking method, similarity measure and uncertainty measure that can be used in the computing with words paradigm are suggested. The results are useful in understanding the uncertainties associated with linguistic terms and hence how to use them effectively in survey design and linguistic information processing.
Anaphora for everyone: pronominal anaphora resolution without a parser We present an algorithm for anaphora resolution which is a modified and extended version of that developed by (Lappin and Leass, 1994). In contrast to that work, our algorithm does not require in-depth, full, syntactic parsing of text. Instead, with minimal compromise in output quality, the modifications enable the resolution process to work from the output of a part of speech tagger, enriched only with annotations of grammatical function of lexical items in the input text stream. Evaluation of the results of our implementation demonstrates that accurate anaphora resolution can be realized within natural language processing frameworks which do not---or cannot--- employ robust and reliable parsing components.
NIST Net: a Linux-based network emulation tool Testing of network protocols and distributed applications has become increasingly complex, as the diversity of networks and underlying technologies increase, and the adaptive behavior of applications becomes more sophisticated. In this paper, we present NIST Net, a tool to facilitate testing and experimentation with network code through emulation. NIST Net enables experimenters to model and effect arbitrary performance dynamics (packet delay, jitter, bandwidth limitations, congestion, packet loss and duplication) on live IP packets passing through a commodity Linux-based PC router. We describe the emulation capabilities of NIST Net; examine its architecture; and discuss some of the implementation challenges encountered in building such a tool to operate at very high network data rates while imposing minimal processing overhead. Calibration results are provided to quantify the fidelity and performance of NIST Net over a wide range of offered loads (up to 1 Gbps), and a diverse set of emulated performance dynamics.
Compressive wireless sensing Compressive sampling is an emerging theory that is based on the fact that a relatively small number of random projections of a signal can contain most of its salient information. In this paper, we introduce the concept of Compressive Wireless Sensing for sensor networks in which a fusion center retrieves signal field information from an ensemble of spatially distributed sensor nodes. Energy and bandwidth are scarce resources in sensor networks and the relevant metrics of interest in our context are 1) the latency involved in information retrieval; and 2) the associated power-distortion trade-off. It is generally recognized that given sufficient prior knowledge about the sensed data (e.g., statistical characterization, homogeneity, etc.), there exist schemes that have very favorable power-distortion-latency trade-offs. We propose a distributed matched source-channel communication scheme, based in part on recent results in compressive sampling theory, for estimation of sensed data at the fusion center and analyze, as a function of the number of sensor nodes, the trade-offs between power, distortion and latency. Compressive wireless sensing is a universal scheme in the sense that it requires no prior knowledge about the sensed data. This universality, however, comes at the cost of optimality (in terms of a less favorable power-distortion-latency trade-off) and we quantify this cost relative to the case when sufficient prior information about the sensed data is assumed.
Analysis of the domain mapping method for elliptic diffusion problems on random domains. In this article, we provide a rigorous analysis of the solution to elliptic diffusion problems on random domains. In particular, based on the decay of the Karhunen-Loève expansion of the domain perturbation field, we establish decay rates for the derivatives of the random solution that are independent of the stochastic dimension. For the implementation of a related approximation scheme, like quasi-Monte Carlo quadrature, stochastic collocation, etc., we propose parametric finite elements to compute the solution of the diffusion problem on each individual realization of the domain generated by the perturbation field. This simplifies the implementation and yields a non-intrusive approach. Having this machinery at hand, we can easily transfer it to stochastic interface problems. The theoretical findings are complemented by numerical examples for both, stochastic interface problems and boundary value problems on random domains.
Proceedings of the 41st Design Automation Conference, DAC 2004, San Diego, CA, USA, June 7-11, 2004
Random Alpha Pagerank We suggest a revision to the PageRank random surfer model that considers the influence of a population of random surfers on the PageRank vector. In the revised model, each member of the population has its own teleportation parameter chosen from a probability distribution, and consequently, the ranking vector is random. We propose three algorithms for computing the statistics of the random ranking vector based respectively on (i) random sampling, (ii) paths along the links of the underlying graph, and (iii) quadrature formulas. We find that the expectation of the random ranking vector produces similar rankings to its deterministic analogue, but the standard deviation gives uncorrelated information (under a Kendall-tau metric) with myriad potential uses. We examine applications of this model to web spam.
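A sampling sketch of the random-α model above: draw a teleportation parameter for each surfer from a probability distribution (the Beta parameters below are hypothetical, chosen to center near the customary 0.85), run ordinary PageRank for each draw, and report the mean and standard deviation of the ranking vector. The link graph is a tiny made-up example.

```python
import numpy as np

def pagerank(P, alpha, tol=1e-10, max_iter=1000):
    """Standard power iteration for PageRank with teleportation parameter alpha."""
    n = P.shape[0]
    x = np.full(n, 1.0 / n)
    v = np.full(n, 1.0 / n)                    # uniform teleportation vector
    for _ in range(max_iter):
        x_new = alpha * (P.T @ x) + (1 - alpha) * v
        if np.abs(x_new - x).sum() < tol:
            break
        x = x_new
    return x

rng = np.random.default_rng(6)
A = np.array([[0, 1, 1], [1, 0, 1], [1, 0, 0]], dtype=float)   # hypothetical link graph
P = A / A.sum(axis=1, keepdims=True)                            # row-stochastic transition matrix
alphas = rng.beta(17.0, 3.0, size=2000)                         # hypothetical Beta prior, mean 0.85
samples = np.array([pagerank(P, a) for a in alphas])
print(samples.mean(axis=0), samples.std(axis=0))
```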
Directional relative position between objects in image processing: a comparison between fuzzy approaches The importance of describing relationships between objects has been highlighted in works in very different areas, including image understanding. Among these relationships, directional relative position relations are important since they provide an important information about the spatial arrangement of objects in the scene. Such concepts are rather ambiguous, they defy precise definitions, but human beings have a rather intuitive and common way of understanding and interpreting them. Therefore in this context, fuzzy methods are appropriate to provide consistent definitions that integrate both quantitative and qualitative knowledge, thus providing a computational representation and interpretation of imprecise spatial relations, expressed in a linguistic way, and including quantitative knowledge. Several fuzzy approaches have been developed in the literature, and the aim of this paper is to review and compare them according to their properties and according to the types of questions they seek to answer.
Fuzzy Concepts and Formal Methods: A Fuzzy Logic Toolkit for Z It has been recognised that formal methods are useful as a modelling tool in requirements engineering. Specification languages such as Z permit the precise and unambiguous modelling of system properties and behaviour. However some system problems, particularly those drawn from the IS problem domain, may be difficult to model in crisp or precise terms. It may also be desirable that formal modelling should commence as early as possible, even when our understanding of parts of the problem domain is only approximate. This paper suggests fuzzy set theory as a possible representation scheme for this imprecision or approximation. We provide a summary of a toolkit that defines the operators, measures and modifiers necessary for the manipulation of fuzzy sets and relations. We also provide some examples of the laws which establish an isomorphism between the extended notation presented here and conventional Z when applied to boolean sets and relations.
Fuzzy optimization of units products in mix-product selection problem using fuzzy linear programming approach In this paper, the modified S-curve membership function methodology is used in a real-life industrial problem of mix-product selection. This problem occurs in production planning management, where a decision maker plays an important role in making decisions in an uncertain environment. As analysts, we try to find a good enough solution for the decision maker to make a final decision. An industrial application of fuzzy linear programming (FLP) through the S-curve membership function has been investigated using a set of real-life data collected from a Chocolate Manufacturing Company. The problem of fuzzy product-mix selection has been defined. The objective of this paper is to find an optimal number of units of products with a higher level of satisfaction, with vagueness as a key factor. Since several decisions were to be taken, a table of optimal units of products with respect to vagueness and degree of satisfaction has been defined to identify the solution with a higher level of units of products and a higher degree of satisfaction. The fuzzy outcome shows that a higher number of units of products need not lead to a higher degree of satisfaction. The findings of this work indicate that the optimal decision depends on the vagueness factor in the fuzzy system of the mix-product selection problem. Furthermore, a high level of units of products is obtained when the vagueness is low.
1.1
0.1
0.1
0.016667
0.003125
0
0
0
0
0
0
0
0
0
Analysis and design of the google congestion control for web real-time communication (WebRTC) Video conferencing applications require low latency and high bandwidth. Standard TCP is not suitable for video conferencing since its reliability and in order delivery mechanisms induce large latency. Recently the idea of using the delay gradient to infer congestion is appearing again and is gaining momentum. In this paper we present an algorithm that is based on estimating through a Kalman filter the end-to-end one way delay variation which is experienced by packets traveling from a sender to a destination. This estimate is compared to an adaptive threshold to dynamically throttle the sending rate. The control algorithm has been implemented over the RTP/RTCP protocol and is currently used in Google Hangouts and in the Chrome WebRTC stack. Experiments have been carried out to evaluate the algorithm performance in the case of variable link capacity, presence of heterogeneous or homogeneous concurrent traffic, and backward path traffic.
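A stripped-down sketch of the delay-based part of the controller described above: a scalar Kalman filter tracks the one-way delay variation between packet groups, and an over-use detector compares the estimate against a threshold, which the real algorithm adapts dynamically. This is an illustration only, not the Chrome/WebRTC implementation, and all constants are assumptions.

```python
class DelayGradientEstimator:
    """Scalar Kalman filter for the one-way delay variation between packet groups."""
    def __init__(self, q=1e-3, r=0.1):
        self.m = 0.0        # estimated delay gradient
        self.p = 1.0        # estimate variance
        self.q = q          # process noise (assumed constant here)
        self.r = r          # measurement noise (adapted online in the real controller)

    def update(self, d_measured):
        self.p += self.q                      # predict
        k = self.p / (self.p + self.r)        # Kalman gain
        self.m += k * (d_measured - self.m)   # correct with the measured delay variation
        self.p *= (1.0 - k)
        return self.m

def overuse_signal(m, threshold):
    """Map the filtered gradient to a rate-control decision."""
    if m > threshold:
        return "decrease"    # over-use detected: throttle the sending rate
    if m < -threshold:
        return "increase"
    return "hold"

est = DelayGradientEstimator()
for d in [0.1, 0.3, 0.8, 1.2, 1.5]:          # made-up per-group delay variations (ms)
    print(overuse_signal(est.update(d), threshold=0.5))
```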
A Quality-of-Experience Index for Streaming Video. With the rapid growth of streaming media applications, there has been a strong demand of quality-of-experience (QoE) measurement and QoE-driven video delivery technologies. Most existing methods rely on bitrate and global statistics of stalling events for QoE prediction. This is problematic for two reasons. First, using the same bitrate to encode different video content results in drastically diff...
A Control-Theoretic Approach for Dynamic Adaptive Video Streaming over HTTP User-perceived quality-of-experience (QoE) is critical in Internet video applications as it impacts revenues for content providers and delivery systems. Given that there is little support in the network for optimizing such measures, bottlenecks could occur anywhere in the delivery system. Consequently, a robust bitrate adaptation algorithm in client-side players is critical to ensure good user experience. Previous studies have shown key limitations of state-of-art commercial solutions and proposed a range of heuristic fixes. Despite the emergence of several proposals, there is still a distinct lack of consensus on: (1) How best to design this client-side bitrate adaptation logic (e.g., use rate estimates vs. buffer occupancy); (2) How well specific classes of approaches will perform under diverse operating regimes (e.g., high throughput variability); or (3) How do they actually balance different QoE objectives (e.g., startup delay vs. rebuffering). To this end, this paper makes three key technical contributions. First, to bring some rigor to this space, we develop a principled control-theoretic model to reason about a broad spectrum of strategies. Second, we propose a novel model predictive control algorithm that can optimally combine throughput and buffer occupancy information to outperform traditional approaches. Third, we present a practical implementation in a reference video player to validate our approach using realistic trace-driven emulations.
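A brute-force sketch of the model-predictive idea above: enumerate bitrate plans over a short horizon, simulate the buffer under a throughput prediction, and score each plan with a QoE-like objective (bitrate reward minus rebuffering and switching penalties), then apply the first decision. Chunk length, weights and the bitrate ladder are illustrative, and the real algorithm is considerably more refined.

```python
import itertools

def mpc_select(bitrates, throughput_pred, buffer_s, horizon=3, chunk_s=4.0,
               lam_rebuf=10.0, lam_switch=1.0, last_rate=None):
    """Pick the next bitrate by exhaustive lookahead over short bitrate plans."""
    best, best_score = None, float("-inf")
    for plan in itertools.product(bitrates, repeat=horizon):
        buf, score, prev = buffer_s, 0.0, last_rate
        for r in plan:
            download = chunk_s * r / throughput_pred        # seconds to fetch the chunk
            rebuf = max(0.0, download - buf)                # stall time if the buffer drains
            buf = max(0.0, buf - download) + chunk_s        # buffer after the chunk arrives
            score += r - lam_rebuf * rebuf                  # bitrate reward, rebuffering penalty
            if prev is not None:
                score -= lam_switch * abs(r - prev)         # smoothness penalty
            prev = r
        if score > best_score:
            best, best_score = plan[0], score               # apply only the first decision
    return best

print(mpc_select([0.3, 0.75, 1.2, 2.4], throughput_pred=1.0, buffer_s=8.0))
```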
Towards A Qoe-Driven Resource Control In Lte And Lte-A Networks We propose a novel architecture for providing quality of experience (QoE) awareness to mobile operator networks. In particular, we describe a possible architecture for QoE-driven resource control for long-term evolution (LTE) and LTE-advanced networks, including a selection of KPIs to be monitored in different network elements. We also provide a description and numerical results of the QoE evaluation process for different data services as well as potential use cases that would benefit from the rollout of the proposed framework.
QoX: What is it really? The article puts in order notions related to Quality of Service that are found in documents on service requirements. Apart from presenting a detailed description of QoS itself, it overviews classes of service (CoS) proposed by main standardization bodies and maps them across various transmission technologies. Standards and concepts related to less commonly used, though not less important, terms su...
Anaphora for everyone: pronominal anaphora resolution without a parser We present an algorithm for anaphora resolution which is a modified and extended version of that developed by (Lappin and Leass, 1994). In contrast to that work, our algorithm does not require in-depth, full syntactic parsing of text. Instead, with minimal compromise in output quality, the modifications enable the resolution process to work from the output of a part-of-speech tagger, enriched only with annotations of the grammatical function of lexical items in the input text stream. Evaluation of the results of our implementation demonstrates that accurate anaphora resolution can be realized within natural language processing frameworks which do not, or cannot, employ robust and reliable parsing components.
Counter braids: a novel counter architecture for per-flow measurement Fine-grained network measurement requires routers and switches to update large arrays of counters at very high link speed (e.g. 40 Gbps). A naive algorithm needs an infeasible amount of SRAM to store both the counters and a flow-to-counter association rule, so that arriving packets can update corresponding counters at link speed. This has made accurate per-flow measurement complex and expensive, and motivated approximate methods that detect and measure only the large flows. This paper revisits the problem of accurate per-flow measurement. We present a counter architecture, called Counter Braids, inspired by sparse random graph codes. In a nutshell, Counter Braids "compresses while counting". It solves the central problems (counter space and flow-to-counter association) of per-flow measurement by "braiding" a hierarchy of counters with random graphs. Braiding results in drastic space reduction by sharing counters among flows; and using random graphs generated on-the-fly with hash functions avoids the storage of flow-to-counter association. The Counter Braids architecture is optimal (albeit with a complex decoder) as it achieves the maximum compression rate asymptotically. For implementation, we present a low-complexity message passing decoding algorithm, which can recover flow sizes with essentially zero error. Evaluation on Internet traces demonstrates that almost all flow sizes are recovered exactly with only a few bits of counter space per flow.
MapReduce: simplified data processing on large clusters MapReduce is a programming model and an associated implementation for processing and generating large datasets that is amenable to a broad variety of real-world tasks. Users specify the computation in terms of a map and a reduce function, and the underlying runtime system automatically parallelizes the computation across large-scale clusters of machines, handles machine failures, and schedules inter-machine communication to make efficient use of the network and disks. Programmers find the system easy to use: more than ten thousand distinct MapReduce programs have been implemented internally at Google over the past four years, and an average of one hundred thousand MapReduce jobs are executed on Google's clusters every day, processing a total of more than twenty petabytes of data per day.
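The canonical word-count example expressed as map and reduce functions, run by a tiny single-process emulation of the model (grouping by key stands in for the shuffle phase); this is only a sketch of the programming model, not Google's runtime.

```python
from collections import defaultdict

def map_fn(doc_id, text):
    # Emit (word, 1) for every token in the document.
    for word in text.split():
        yield word.lower(), 1

def reduce_fn(word, counts):
    # Sum the partial counts for one key.
    yield word, sum(counts)

def run_mapreduce(inputs, map_fn, reduce_fn):
    groups = defaultdict(list)
    for key, value in inputs:
        for k, v in map_fn(key, value):      # map phase
            groups[k].append(v)              # "shuffle": group by key
    out = {}
    for k, vs in groups.items():             # reduce phase
        for rk, rv in reduce_fn(k, vs):
            out[rk] = rv
    return out

docs = [("d1", "the quick brown fox"), ("d2", "the lazy dog the end")]
print(run_mapreduce(docs, map_fn, reduce_fn))
```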
Numerical Integration using Sparse Grids We present and review algorithms for the numerical integration of multivariate functions defined over d-dimensional cubes using several variants of the sparse grid method first introduced by Smolyak [51]. In this approach, multivariate quadrature formulas are constructed using combinations of tensor products of suited one-dimensional formulas. The computing cost is almost independent of the dimension of the problem if the function under consideration has bounded mixed derivatives. We suggest...
Optimal design of a CMOS op-amp via geometric programming We describe a new method for determining component values and transistor dimensions for CMOS operational amplifiers (op-amps). We observe that a wide variety of design objectives and constraints have a special form, i.e., they are posynomial functions of the design variables. As a result, the amplifier design problem can be expressed as a special form of optimization problem called geometric programming, for which very efficient global optimization methods have been developed. As a consequence we can efficiently determine globally optimal amplifier designs or globally optimal tradeoffs among competing performance measures such as power, open-loop gain, and bandwidth. Our method, therefore, yields completely automated sizing of (globally) optimal CMOS amplifiers, directly from specifications. In this paper, we apply this method to a specific widely used operational amplifier architecture, showing in detail how to formulate the design problem as a geometric program. We compute globally optimal tradeoff curves relating performance measures such as power dissipation, unity-gain bandwidth, and open-loop gain. We show how the method can be used to size robust designs, i.e., designs guaranteed to meet the specifications for a variety of process conditions and parameters.
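A toy illustration, under assumed numbers, of why posynomial problems become convex after the change of variables y = log x: every posynomial turns into a log-sum-exp of affine functions. The two-variable "objective" and "spec" below are made up, and a general-purpose solver (SLSQP) is used only for demonstration; dedicated GP solvers are what one would use in practice.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import logsumexp

def log_posynomial(y, coeffs, exponents):
    # log( sum_k c_k * exp(a_k . y) ), which is convex in y = log(x).
    return logsumexp(np.log(coeffs) + exponents @ y)

# Objective: x1*x2 + 4/(x1*x2^2)   (a made-up "power"-like posynomial)
obj_c, obj_a = np.array([1.0, 4.0]), np.array([[1.0, 1.0], [-1.0, -2.0]])
# Constraint: 2/x1 + 3/x2 <= 1     (a made-up "performance spec")
con_c, con_a = np.array([2.0, 3.0]), np.array([[-1.0, 0.0], [0.0, -1.0]])

res = minimize(lambda y: log_posynomial(y, obj_c, obj_a),
               x0=np.log([5.0, 7.0]),       # feasible starting point
               method="SLSQP",
               constraints=[{"type": "ineq",
                             "fun": lambda y: -log_posynomial(y, con_c, con_a)}])
print("optimal x =", np.exp(res.x))
```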
Sparse Reconstruction by Separable Approximation Finding sparse approximate solutions to large underdetermined linear systems of equations is a common problem in signal/image processing and statistics. Basis pursuit, the least absolute shrinkage and selection operator (LASSO), wavelet-based deconvolution and reconstruction, and compressed sensing (CS) are a few well-known areas in which problems of this type appear. One standard approach is to minimize an objective function that includes a quadratic ($\ell_2$) error term added to a sparsity-inducing (usually $\ell_1$) regularizer. We present an algorithmic framework for the more general problem of minimizing the sum of a smooth convex function and a nonsmooth, possibly nonconvex regularizer. We propose iterative methods in which each step is obtained by solving an optimization subproblem involving a quadratic term with diagonal Hessian (i.e., separable in the unknowns) plus the original sparsity-inducing regularizer; our approach is suitable for cases in which this subproblem can be solved much more rapidly than the original problem. Under mild conditions (namely convexity of the regularizer), we prove convergence of the proposed iterative algorithm to a minimum of the objective function. In addition to solving the standard $\ell_2$-$\ell_1$ case, our framework yields efficient solution techniques for other regularizers, such as an $\ell_\infty$ norm and group-separable regularizers. It also generalizes immediately to the case in which the data is complex rather than real. Experiments with CS problems show that our approach is competitive with the fastest known methods for the standard $\ell_2$-$\ell_1$ problem, as well as being efficient on problems with other separable regularization terms.
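The $\ell_2$-$\ell_1$ special case mentioned above has a closed-form separable subproblem solution, the soft-thresholding operator; the sketch below is a plain ISTA-style loop built around that operator, with step-size rules simplified relative to the framework described in the abstract.

```python
import numpy as np

def soft_threshold(v, tau):
    # Closed-form solution of the separable quadratic-plus-l1 subproblem.
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def ista(A, b, lam, iters=500):
    x = np.zeros(A.shape[1])
    alpha = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    for _ in range(iters):
        grad = A.T @ (A @ x - b)
        x = soft_threshold(x - grad / alpha, lam / alpha)
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100); x_true[[3, 17, 60]] = [2.0, -1.5, 1.0]
b = A @ x_true
print(np.nonzero(np.round(ista(A, b, lam=0.05), 2))[0])
```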
Design of interval type-2 fuzzy models through optimal granularity allocation In this paper, we offer a new design methodology of type-2 fuzzy models whose intent is to effectively exploit the uncertainty of non-numeric membership functions. A new performance index, which guides the development of the fuzzy model, is used to navigate the construction of the fuzzy model. The underlying idea is that an optimal granularity allocation throughout the membership functions used in the fuzzy model leads to the best design. In contrast to the commonly utilized criterion where one strives for the highest accuracy of the model, the proposed index is formed in such a way so that the type-2 fuzzy model produced intervals, which ''cover'' the experimental data and at the same time are made as narrow (viz. specific) as possible. Genetic algorithm is proposed to automate the design process and further improve the results by carefully exploiting the search space. Experimental results show the efficiency of the proposed design methodology.
Process variability-aware transient fault modeling and analysis Due to reduction in device feature size and supply voltage, the sensitivity of digital systems to transient faults is increasing dramatically. As technology scales further, the increase in transistor integration capacity also leads to the increase in process and environmental variations. Despite these difficulties, it is expected that systems remain reliable while delivering the required performance. Reliability and variability are emerging as new design challenges, thus pointing to the importance of modeling and analysis of transient faults and variation sources for the purpose of guiding the design process. This work presents a symbolic approach to modeling the effect of transient faults in digital circuits in the presence of variability due to process manufacturing. The results show that using a nominal case and not including variability effects, can underestimate the SER by 5% for the 50% yield point and by 10% for the 90% yield point.
Split Bregman iterative algorithm for sparse reconstruction of electrical impedance tomography In this paper, we present an evaluation of the use of the split Bregman iterative algorithm for the $L_1$-norm regularized inverse problem of electrical impedance tomography. Simulations are performed to validate that our algorithm is competitive in terms of imaging quality and computational speed in comparison with several state-of-the-art algorithms. Results also indicate that in contrast to the conventional $L_2$-norm regularization method and total variation (TV) regularization method, the $L_1$-norm regularization method can sharpen the edges and is more robust against data noises.
Scores (score_0 to score_13): 1.11, 0.025, 0.003571, 0.0004, 0.000133, 0, 0, 0, 0, 0, 0, 0, 0, 0
Template-Free Symbolic Performance Modeling of Analog Circuits via Canonical-Form Functions and Genetic Programming This paper presents CAFFEINE, a method to automatically generate compact interpretable symbolic performance models of analog circuits with no prior specification of an equation template. CAFFEINE uses SPICE simulation data to model arbitrary nonlinear circuits and circuit characteristics. CAFFEINE expressions are canonical-form functions: product-of-sum layers alternating with sum-of-product layers, as defined by a grammar. Multiobjective genetic programming trades off error with model complexity. On test problems, CAFFEINE models demonstrate lower prediction error than posynomials, splines, neural networks, kriging, and support vector machines. This paper also demonstrates techniques to scale CAFFEINE to larger problems.
Bayesian Model Fusion: A statistical framework for efficient pre-silicon validation and post-silicon tuning of complex analog and mixed-signal circuits In this paper, we describe a novel statistical framework, referred to as Bayesian Model Fusion (BMF), that allows us to minimize the simulation and/or measurement cost for both pre-silicon validation and post-silicon tuning of analog and mixed-signal (AMS) circuits with consideration of large-scale process variations. The BMF technique is motivated by the fact that today's AMS design cycle typically spans multiple stages (e.g., schematic design, layout design, first tape-out, second tape-out, etc.). Hence, we can reuse the simulation and/or measurement data collected at an early stage to facilitate efficient validation and tuning of AMS circuits with a minimal amount of data at the late stage. The efficacy of BMF is demonstrated by using several industrial circuit examples.
Co-Learning Bayesian Model Fusion: Efficient Performance Modeling of Analog and Mixed-Signal Circuits Using Side Information Efficient performance modeling of today's analog and mixed-signal (AMS) circuits is an important yet challenging task. In this paper, we propose a novel performance modeling algorithm that is referred to as Co-Learning Bayesian Model Fusion (CL-BMF). The key idea of CL-BMF is to take advantage of the additional information collected from simulation and/or measurement to reduce the performance modeling cost. Different from the traditional performance modeling approaches which focus on the prior information of model coefficients (i.e. the coefficient side information) only, CL-BMF takes advantage of another new form of prior knowledge: the performance side information. In particular, CL-BMF combines the coefficient side information, the performance side information and a small number of training samples through Bayesian inference based on a graphical model. Two circuit examples designed in a commercial 32nm SOI CMOS process demonstrate that CL-BMF achieves up to 5X speed-up over other state-of-the-art performance modeling techniques without surrendering any accuracy.
Beyond low-order statistical response surfaces: latent variable regression for efficient, highly nonlinear fitting The number and magnitude of process variation sources are increasing as we scale further into the nano regime. Today's most successful response surface methods limit us to low-order forms (linear, quadratic) to make the fitting tractable. Unfortunately, not all variational scenarios are well modeled with low-order surfaces. We show how to exploit latent variable regression ideas to support efficient extraction of arbitrarily nonlinear statistical response surfaces. An implementation of these ideas called SiLVR, applied to a range of analog and digital circuits, in technologies from 90 to 45nm, shows significant improvements in prediction, with errors reduced by up to 21X, with very reasonable runtime costs.
Scalable and efficient analog parametric fault identification Analog circuits embedded in large mixed-signal designs can fail due to unexpected process parameter excursions. To evaluate manufacturing tests in terms of their ability to detect such failures, parametric faults leading to circuit failures should be identified. This paper proposes an iterative sampling method to identify these faults in large-scale analog circuits with a constrained simulation budget. Experiment results on two circuits from a serial IO interface demonstrate the effectiveness of the methodology. The proposed method identifies a significantly larger and diverse set of critical parametric faults compared to a Monte Carlo-based approach for identical computational budget, particularly for cases involving significant process variations.
Statistical regression for efficient high-dimensional modeling of analog and mixed-signal performance variations The continuous technology scaling brings about high-dimensional performance variations that cannot be easily captured by the traditional response surface modeling. In this paper we propose a new statistical regression (STAR) technique that applies a novel strategy to address this high dimensionality issue. Unlike most traditional response surface modeling techniques that solve model coefficients from over-determined linear equations, STAR determines all unknown coefficients by moment matching. As such, a large number of (e.g., 10^3~10^5) model coefficients can be extracted from a small number of (e.g., 10^2~10^3) sampling points without over-fitting. In addition, a novel recursive estimator is proposed to accurately and efficiently predict the moment values. The proposed recursive estimator is facilitated by exploiting the interaction between different moment estimators and formulating the moment estimation problem into a special form that can be iteratively solved. Several circuit examples designed in commercial CMOS processes demonstrate that STAR achieves more than 20x runtime speedup compared with the traditional response surface modeling.
Classifying circuit performance using active-learning guided support vector machines Leveraging machine learning has been proven as a promising avenue for addressing many practical circuit design and verification challenges. We demonstrate a novel active learning guided machine learning approach for characterizing circuit performance. When employed under the context of support vector machines, the proposed probabilistically weighted active learning approach is able to dramatically reduce the size of the training data, leading to significant reduction of the overall training cost. The proposed active learning approach is extended to the training of asymmetric support vector machine classifiers, which is further sped up by a global acceleration scheme. We demonstrate the excellent performance of the proposed techniques using three case studies: PLL lock-time verification, SRAM yield analysis and prediction of chip peak temperature using a limited number of on-chip temperature sensors.
Measurement and characterization of pattern dependent process variations of interconnect resistance, capacitance and inductance in nanometer technologies Process variations have become a serious concern for nanometer technologies. The interconnect and device variations include inter-and intra-die variations of geometries, as well as process and electrical parameters. In this paper, pattern (i.e. density, width and space) dependent interconnect thickness and width variations are studied based on a well-designed test chip in a 90 nm technology. The parasitic resistance and capacitance variations due to the process variations are investigated, and process-variation-aware extraction techniques are proposed. In the test chip, electrical and physical measurements show strong metal thickness and width variations mainly due to chemical mechanical polishing (CMP) in nanometer technologies. The loop inductance dependence of return patterns is also validated in the test chip. The proposed new characterization methods extract interconnect RC variations as a function of metal density, width and space. Simulation results show excellent agreement between on-wafer measurements and extractions of various RC structures, including a set of metal loaded/unloaded ring oscillators in a complex wiring environment.
Generalized Krylov recycling methods for solution of multiple related linear equation systems in electromagnetic analysis In this paper we propose methods for fast iterative solution of multiple related linear systems of equations. Such systems arise, for example, in building pattern libraries for interconnect parasitic extraction, parasitic extraction under process variation, and parameterized interconnect characterization. Our techniques are based on a generalized form of "recycled" Krylov subspace methods that use sharing of information between related systems of equations to accelerate the iterative solution. Experimental results on electromagnetics problems demonstrate that the proposed method can achieve a speed-up of 5X~30X compared to direct GMRES applied sequentially to the individual systems. These methods are generic, fully treat nonlinear perturbations without approximation, and can be applied in a wide variety of application domains outside electromagnetics.
Statistical timing analysis for intra-die process variations with spatial correlations Process variations have become a critical issue in performance verification of high-performance designs. We present a new, statistical timing analysis method that accounts for inter- and intra-die process variations and their spatial correlations. Since statistical timing analysis has an exponential run time complexity, we propose a method whereby a statistical bound on the probability distribution function of the exact circuit delay is computed with linear run time. First, we develop a model for representing inter- and intra-die variations and their spatial correlations. Using this model, we then show how gate delays and arrival times can be represented as a sum of components, such that the correlation information between arrival times and gate delays is preserved. We then show how arrival times are propagated and merged in the circuit to obtain an arrival time distribution that is an upper bound on the distribution of the exact circuit delay. We prove the correctness of the bound and also show how the bound can be improved by propagating multiple arrival times. The proposed algorithms were implemented and tested on a set of benchmark circuits under several process variation scenarios. The results were compared with Monte Carlo simulation and show an accuracy of 3.32% on average over all test cases.
A First-Order Smoothed Penalty Method for Compressed Sensing We propose a first-order smoothed penalty algorithm (SPA) to solve the sparse recovery problem $\min\{\|x\|_1:Ax=b\}$. SPA is efficient as long as the matrix-vector product $Ax$ and $A^{T}y$ can be computed efficiently; in particular, $A$ need not have orthogonal rows. SPA converges to the target signal by solving a sequence of penalized optimization subproblems, and each subproblem is solved using Nesterov's optimal algorithm for simple sets [Yu. Nesterov, Introductory Lectures on Convex Optimization: A Basic Course, Kluwer Academic Publishers, Norwell, MA, 2004] and [Yu. Nesterov, Math. Program., 103 (2005), pp. 127-152]. We show that the SPA iterates $x_k$ are $\epsilon$-feasible; i.e. $\|Ax_k-b\|_2\leq\epsilon$ and $\epsilon$-optimal; i.e. $|~\|x_k\|_1-\|x^\ast\|_1|\leq\epsilon$ after $\tilde{\mathcal{O}}(\epsilon^{-\frac{3}{2}})$ iterations. SPA is able to work with $\ell_1$, $\ell_2$, or $\ell_{\infty}$ penalty on the infeasibility, and SPA can be easily extended to solve the relaxed recovery problem $\min\{\|x\|_1:\|Ax-b\|_2\leq\delta\}$.
On intersection sets in desarguesian affine spaces Lower bounds on the size of t-fold blocking sets with respect to hyperplanes or t-intersection sets in AG(n;q) are obtained, some of which are sharp.
Estimation of flexible fuzzy GARCH models for conditional density estimation In this work we introduce a new flexible fuzzy GARCH model for conditional density estimation. The model combines two different types of uncertainty, namely fuzziness or linguistic vagueness, and probabilistic uncertainty. The probabilistic uncertainty is modeled through a GARCH model while the fuzziness or linguistic vagueness is presented in the antecedent and combination of the rule base system. The fuzzy GARCH model under study allows for a linguistic interpretation of the gradual changes in the output density, providing a simple understanding of the process. Such a system can capture different properties of data, such as fat tails, skewness and multimodality in one single model. This type of models can be useful in many fields such as macroeconomic analysis, quantitative finance and risk management. The relation to existing similar models is discussed, while the properties, interpretation and estimation of the proposed are provided. The model performance is illustrated in simulated time series data exhibiting complex behavior and a real data application of volatility forecasting for the S&P 500 daily returns series.
3D visual experience oriented cross-layer optimized scalable texture plus depth based 3D video streaming over wireless networks. •A 3D experience oriented 3D video cross-layer optimization method is proposed.•Networking-related 3D visual experience model for 3D video streaming is presented.•3D video characteristics are fully considered in the cross-layer optimization.•MAC layer channel allocation and physical layer MCS are systematically optimized.•Results show that our method obtains superior 3D visual experience to others.
Scores (score_0 to score_13): 1.026191, 0.017523, 0.0175, 0.0113, 0.00933, 0.005008, 0.000833, 0.000193, 0.000081, 0.000012, 0, 0, 0, 0
Parametric yield maximization using gate sizing based on efficient statistical power and delay gradient computation With the increased significance of leakage power and performance variability, the yield of a design is becoming constrained both by power and performance limits, thereby significantly complicating circuit optimization. In this paper, we propose a new optimization method for yield optimization under simultaneous leakage power and performance limits. The optimization approach uses a novel leakage power and performance analysis that is statistical in nature and considers the correlation between leakage power and performance to enable accurate computation of circuit yield under power and delay limits. We then propose a new heuristic approach to incrementally compute the gradient of yield with respect to gate sizes in the circuit with high efficiency and accuracy. We then show how this gradient information can be effectively used by a non-linear optimizer to perform yield optimization. We consider both inter-die and intra-die variations with correlated and random components. The proposed approach is implemented and tested and we demonstrate up to 40% yield improvement compared to a deterministically optimized circuit.
Timing Criticality For Timing Yield Optimization Block-based SSTA analyzes the timing variation of a chip caused by process variations effectively. However, block-based SSTA cannot identify critical nodes, nodes that highly influence the timing yield of a chip, used as the effective guidance of timing yield optimization. In this paper, we propose a new timing criticality to identify those nodes, referred to as the timing yield criticality (TYC). The proposed TYC is defined as the change in the timing yield, which is induced by the change in the mean arrival time at a node. For efficiency, we estimate the TYC through linear approximation instead of propagating the changed arrival time at a node to its fallouts. In experiments using the ISCAS 85 benchmark circuits, the proposed method estimated TYCs with the expense of 9.8% of the runtime for the exact computation. The proposed method identified the node that gives the greatest effect on the timing yield in all benchmark circuits, except C6288, while existing methods did not identify that for any circuit. In addition, the proposed method identified 98.4% of the critical nodes in the top 1% in the effect on the timing yield, while existing methods identified only about 10%.
Optimization objectives and models of variation for statistical gate sizing This paper approaches statistical optimization by examining gate delay variation models and optimization objectives. Most previous work on statistical optimization has focused exclusively on the optimization algorithms without considering the effects of the variation models and objective functions. This work empirically derives a simple variation model that is then used to optimize for robustness. Optimal results from example circuits are used to study the effect of the statistical objective function on parametric yield.
Statistical leakage minimization through joint selection of gate sizes, gate lengths and threshold voltage This paper proposes a novel methodology for statistical leakage minimization of digital circuits. A function of mean and variance of the circuit leakage is minimized with constraint on α-percentile of the delay using physical delay models. Since the leakage is a strong function of the threshold voltage and gate length, considering them as design variables can provide significant amount of power savings. The leakage minimization problem is formulated as a multivariable convex optimization problem. We demonstrate that statistical optimization can lead to more than 37% savings in nominal leakage compared to worst-case techniques that perform only gate sizing.
On path-based learning and its applications in delay test and diagnosis This paper describes the implementation of a novel path-based learning methodology that can be applied for two purposes: (1) In a pre-silicon simulation environment, path-based learning can be used to produce a fast and approximate simulator for statistical timing simulation. (2) In post-silicon phase, path-based learning can be used as a vehicle to derive critical paths based on the pass/fail behavior observed from the test chips. Our path-based learning methodology consists of four major components: a delay test pattern set, a logic simulator, a set of selected paths as the basis for learning, and a machine learner. We explain the key concepts in this methodology and present experimental results to demonstrate its feasibility and applications.
Fast min-cost buffer insertion under process variations Process variation has become a critical problem in modern VLSI fabrication. In the presence of process variation, buffer insertion problem under performance constraints becomes more difficult since the solution space expands greatly. We propose efficient dynamic programming approaches to handle the min-cost buffer insertion under process variations. Our approaches handle delay constraints and slew constraints, in trees and in combinational circuits. The experimental results demonstrate that in general, process variations have great impact on slew-constrained buffering, but much less impact on delay-constrained buffering, especially for small nets. Our approaches have less than 9% runtime overhead on average compared with a single pass of deterministic buffering for delay constrained buffering, and get 56% yield improvement and 11.8% buffer area reduction, on average, for slew constrained buffering.
Clustering based pruning for statistical criticality computation under process variations We present a new linear time technique to compute criticality information in a timing graph by dividing it into "zones". Errors in using tightness probabilities for criticality computation are dealt with using a new clustering based pruning algorithm which greatly reduces the size of circuit-level cutsets. Our clustering algorithm gives a 150X speedup compared to a pairwise pruning strategy in addition to ordering edges in a cutset to reduce errors due to Clark's MAX formulation. The clustering based pruning strategy coupled with a localized sampling technique reduces errors to within 5% of Monte Carlo simulations with large speedups in runtime.
A unified framework for statistical timing analysis with coupling and multiple input switching As technology scales to smaller dimensions, increasing process variations, coupling induced delay variations and multiple input switching effects make timing verification extremely challenging. In this paper, we establish a theoretical framework for statistical timing analysis with coupling and multiple input switching. We prove the convergence of our proposed iterative approach and discuss implementation issues under the assumption of a Gaussian distribution for the parameters of variation. A statistical timer based on our proposed approach is developed and experimental results are presented for the ISCAS benchmarks. We juxtapose our timer with a single-pass, non-iterative statistical timer that does not consider the mutual dependence of coupling with timing and another statistical timer that handles coupling deterministically. Monte Carlo simulations reveal a distinct gain (up to 24%) in accuracy by our approach in comparison to the others mentioned.
Transistor-specific delay modeling for SSTA SSTA has received a considerable amount of attention in recent years. However, it is a general rule that any approach can only be as accurate as the underlying models. Thus, variation models are an important research topic, in addition to the development of statistical timing tools. These models attempt to predict fluctuations in parameters like doping concentration, critical dimension (CD), and ILD thickness, as well as their spatial correlations. Modeling CD variation is a difficult problem because it contains a systematic component that is context dependent as well as a probabilistic component that is caused by exposure and defocus variation. Since these variations are dependent on topology, modern-day designs can potentially contain thousands of unique CD distributions. To capture all of the individual CD distributions within statistical timing, a transistor-specific model is required. However, statistical CD models used in industry today do not distinguish between transistors contained within different standard cell types (at the same location in a die), nor do they distinguish between transistors contained within the same standard cell. In this work we verify that the current methodology is error-prone using a 90nm industrial library and lithography recipe (with industrial OPC) and propose a new SSTA delay model that on average reduces error of standard deviation from 11.8% to 4.1% when the total variation (σ/μ) is 4.9% - a 2.9X reduction. Our model is compatible with existing SSTA techniques and can easily incorporate other sources of variation such as random dopant fluctuation and line-edge roughness.
A block rational Arnoldi algorithm for multipoint passive model-order reduction of multiport RLC networks Work in the area of model-order reduction for RLC interconnect networks has focused on building reduced-order models that preserve the circuit-theoretic properties of the network, such as stability, passivity, and synthesizability (Silveira et al., 1996). Passivity is the one circuit-theoretic property that is vital for the successful simulation of a large circuit netlist containing reduced-order models of its interconnect networks. Non-passive reduced-order models may lead to instabilities even if they are themselves stable. We address the problem of guaranteeing the accuracy and passivity of reduced-order models of multiport RLC networks at any finite number of expansion points. The novel passivity-preserving model-order reduction scheme is a block version of the rational Arnoldi algorithm (Ruhe, 1994). The scheme reduces to that of (Odabasioglu et al., 1997) when applied to a single expansion point at zero frequency. Although the treatment of this paper is restricted to expansion points that are on the negative real axis, it is shown that the resulting passive reduced-order model is superior in accuracy to the one that would result from expanding the original model around a single point. Nyquist plots are used to illustrate both the passivity and the accuracy of the reduced order models.
Breakdown of equivalence between the minimal l1-norm solution and the sparsest solution Finding the sparsest solution to a set of underdetermined linear equations is NP-hard in general. However, recent research has shown that for certain systems of linear equations, the sparsest solution (i.e. the solution with the smallest number of nonzeros), is also the solution with minimal l1 norm, and so can be found by a computationally tractable method. For a given n by m matrix Φ defining a system y = Φα, with n < m making the system underdetermined, this phenomenon holds whenever there exists a 'sufficiently sparse' solution α0. We quantify the 'sufficient sparsity' condition, defining an equivalence breakdown point (EBP): the degree of sparsity of α required to guarantee equivalence to hold; this threshold depends on the matrix Φ. In this paper we study the size of the EBP for 'typical' matrices with unit norm columns (the uniform spherical ensemble (USE)); Donoho showed that for such matrices Φ, the EBP is at least proportional to n. We distinguish three notions of breakdown point (global, local, and individual) and describe a semi-empirical heuristic for predicting the local EBP at this ensemble. Our heuristic identifies a configuration which can cause breakdown, and predicts the level of sparsity required to avoid that situation. In experiments, our heuristic provides upper and lower bounds bracketing the EBP for 'typical' matrices in the USE. For instance, for an n × m matrix Φ_{n,m} with m = 2n, our heuristic predicts breakdown of local equivalence when the coefficient vector α has about 30% nonzeros (relative to the reduced dimension n). This figure reliably describes the observed empirical behavior. A rough approximation to the observed breakdown point is provided by the simple formula 0.44 · n/log(2m/n). There are many matrix ensembles of interest outside the USE; our heuristic may be useful in speeding up empirical studies of breakdown point at such ensembles. Rather than solving numerous linear programming problems per n, m combination, at least several for each degree of sparsity, the heuristic suggests to conduct a few experiments to measure the driving term of the heuristic and derive predictive bounds. We tested the applicability of this heuristic to three special ensembles of matrices, including the partial Hadamard ensemble and the partial Fourier ensemble, and found that it accurately predicts the sparsity level at which local equivalence breakdown occurs, which is at a lower level than for the USE. A rough approximation to the prediction is provided by the simple formula 0.65 · n/log(1 + 10m/n).
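A small numerical experiment in the spirit of this line of work, with arbitrary problem sizes: plant a k-sparse solution, solve basis pursuit min ||x||_1 s.t. Ax = b as a linear program (x = u - v with u, v >= 0), and check whether the planted vector is recovered.

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, b):
    # min ||x||_1 s.t. Ax = b, rewritten as an LP in (u, v) with x = u - v.
    n = A.shape[1]
    c = np.ones(2 * n)                       # objective: sum(u) + sum(v)
    A_eq = np.hstack([A, -A])                # A(u - v) = b
    res = linprog(c, A_eq=A_eq, b_eq=b, bounds=[(0, None)] * (2 * n),
                  method="highs")
    u, v = res.x[:n], res.x[n:]
    return u - v

rng = np.random.default_rng(2)
n, m, k = 50, 100, 8                         # n equations, m unknowns, k-sparse
A = rng.standard_normal((n, m))
x0 = np.zeros(m); x0[rng.choice(m, k, replace=False)] = rng.standard_normal(k)
x_bp = basis_pursuit(A, A @ x0)
print("recovered planted solution:", np.allclose(x_bp, x0, atol=1e-6))
```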
Asymptotic Analysis of MAP Estimation via the Replica Method and Applications to Compressed Sensing The replica method is a nonrigorous but well-known technique from statistical physics used in the asymptotic analysis of large, random, nonlinear problems. This paper applies the replica method, under the assumption of replica symmetry, to study estimators that are maximum a posteriori (MAP) under a postulated prior distribution. It is shown that with random linear measurements and Gaussian noise, the replica-symmetric prediction of the asymptotic behavior of the postulated MAP estimate of an $n$-dimensional vector “decouples” as $n$ scalar postulated MAP estimators. The result is based on applying a hardening argument to the replica analysis of postulated posterior mean estimators of Tanaka and of Guo and Verdú. The replica-symmetric postulated MAP analysis can be readily applied to many estimators used in compressed sensing, including basis pursuit, least absolute shrinkage and selection operator (LASSO), linear estimation with thresholding, and zero norm-regularized estimation. In the case of LASSO estimation, the scalar estimator reduces to a soft-thresholding operator, and for zero norm-regularized estimation, it reduces to a hard threshold. Among other benefits, the replica method provides a computationally tractable method for precisely predicting various performance metrics including mean-squared error and sparsity pattern recovery probability.
Multi-criteria analysis for a maintenance management problem in an engine factory: rational choice The industrial organization needs to develop better methods for evaluating the performance of its projects. We are interested in the problems related to pieces with differing degrees of dirt. In this direction, we propose and evaluate a maintenance decision problem in an engine factory that is specialized in the production, sale and maintenance of medium- and slow-speed four-stroke engines. The main purpose of this paper is to study the problem by means of the analytic hierarchy process to obtain the weights of the criteria, and the TOPSIS method as a multicriteria decision-making technique to obtain the ranking of alternatives, when the information is given in linguistic terms.
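A minimal TOPSIS routine of the kind referred to above; the criteria weights (which AHP would supply), the alternatives, and the criteria values are all hypothetical.

```python
import numpy as np

def topsis(X, weights, benefit):
    # X: alternatives x criteria; benefit[j] is True when larger is better.
    V = X / np.linalg.norm(X, axis=0) * weights          # weighted, normalized
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_plus = np.linalg.norm(V - ideal, axis=1)            # distance to ideal
    d_minus = np.linalg.norm(V - anti, axis=1)            # distance to anti-ideal
    closeness = d_minus / (d_plus + d_minus)
    return np.argsort(-closeness), closeness

# 3 maintenance alternatives x 3 criteria: cost (lower is better),
# cleaning effectiveness and availability (higher is better).
X = np.array([[120.0, 0.80, 0.95],
              [ 90.0, 0.70, 0.90],
              [150.0, 0.90, 0.97]])
ranking, score = topsis(X, weights=np.array([0.5, 0.3, 0.2]),
                        benefit=np.array([False, True, True]))
print(ranking, np.round(score, 3))
```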
Overview of HEVC High-Level Syntax and Reference Picture Management The increasing proportion of video traffic in telecommunication networks puts an emphasis on efficient video compression technology. High Efficiency Video Coding (HEVC) is the forthcoming video coding standard that provides substantial bit rate reductions compared to its predecessors. In the HEVC standardization process, technologies such as picture partitioning, reference picture management, and parameter sets are categorized as “high-level syntax.” The design of the high-level syntax impacts the interface to systems and error resilience, and provides new functionalities. This paper presents an overview of the HEVC high-level syntax, including network abstraction layer unit headers, parameter sets, picture partitioning schemes, reference picture management, and supplemental enhancement information messages.
Scores (score_0 to score_13): 1.009575, 0.015456, 0.014286, 0.011299, 0.00732, 0.007143, 0.003142, 0.000915, 0.000078, 0.000003, 0, 0, 0, 0
Guaranteed clustering and biclustering via semidefinite programming Identifying clusters of similar objects in data plays a significant role in a wide range of applications. As a model problem for clustering, we consider the densest $k$-disjoint-clique problem, whose goal is to identify the collection of $k$ disjoint cliques of a given weighted complete graph maximizing the sum of the densities of the complete subgraphs induced by these cliques. In this paper, we establish conditions ensuring exact recovery of the densest $k$ cliques of a given graph from the optimal solution of a particular semidefinite program. In particular, the semidefinite relaxation is exact for input graphs corresponding to data consisting of $k$ large, distinct clusters and a smaller number of outliers. This approach also yields a semidefinite relaxation with similar recovery guarantees for the biclustering problem. Given a set of objects and a set of features exhibited by these objects, biclustering seeks to simultaneously group the objects and features according to their expression levels. This problem may be posed as that of partitioning the nodes of a weighted bipartite complete graph such that the sum of the densities of the resulting bipartite complete subgraphs is maximized. As in our analysis of the densest $k$-disjoint-clique problem, we show that the correct partition of the objects and features can be recovered from the optimal solution of a semidefinite program in the case that the given data consists of several disjoint sets of objects exhibiting similar features. Empirical evidence from numerical experiments supporting these theoretical guarantees is also provided.
Multireference alignment using semidefinite programming The multireference alignment problem consists of estimating a signal from multiple noisy shifted observations. Inspired by existing Unique-Games approximation algorithms, we provide a semidefinite program (SDP) based relaxation which approximates the maximum likelihood estimator (MLE) for the multireference alignment problem. Although we show this MLE problem is Unique-Games hard to approximate within any constant, we observe that our poly-time approximation algorithm for this problem appears to perform quite well in typical instances, outperforming existing methods. In an attempt to explain this behavior we provide stability guarantees for our SDP under a random noise model on the observations. This case is more challenging to analyze than traditional semi-random instances of Unique-Games: the noise model is on vertices of a graph and translates into dependent noise on the edges. Interestingly, we show that if certain positivity constraints in the relaxation are dropped, its solution becomes equivalent to performing phase correlation, a popular method used for pairwise alignment in imaging applications. Finally, we describe how symmetry reduction techniques from matrix representation theory can greatly decrease the computational cost of the SDP considered.
Relax, No Need to Round: Integrality of Clustering Formulations We study exact recovery conditions for convex relaxations of point cloud clustering problems, focusing on two of the most common optimization problems for unsupervised clustering: k-means and k-median clustering. Motivations for focusing on convex relaxations are: (a) they come with a certificate of optimality, and (b) they are generic tools which are relatively parameter-free, not tailored to specific assumptions over the input. More precisely, we consider the distributional setting where there are k clusters in R^m and data from each cluster consists of n points sampled from a symmetric distribution within a ball of unit radius. We ask: what is the minimal separation distance between cluster centers needed for convex relaxations to exactly recover these k clusters as the optimal integral solution? For the k-median linear programming relaxation we show a tight bound: exact recovery is obtained given arbitrarily small pairwise separation ε > 0 between the balls. In other words, the pairwise center separation is δ > 2 + ε. Under the same distributional model, the k-means LP relaxation fails to recover such clusters at separation as large as δ = 4. Yet, if we enforce PSD constraints on the k-means LP, we get exact cluster recovery at separation as low as δ > min{2 + √(2k/m), 2 + √2 + 2/m} + ε. In contrast, common heuristics such as Lloyd's algorithm (a.k.a. the k-means algorithm) can fail to recover clusters in this setting; even with arbitrarily large cluster separation, k-means++ with overseeding by any constant factor fails with high probability at exact cluster recovery. To complement the theoretical analysis, we provide an experimental study of the recovery guarantees for these various methods, and discuss several open problems which these experiments suggest.
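For contrast with the relaxations, here is a minimal Lloyd's k-means baseline of the sort the abstract mentions, run on well-separated synthetic clusters; it is purely illustrative and says nothing about the separation thresholds quoted above.

```python
import numpy as np

def lloyd_kmeans(X, k, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]   # random data-point init
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)                        # assignment step
        new_centers = []
        for j in range(k):                               # update step
            pts = X[labels == j]
            new_centers.append(pts.mean(axis=0) if len(pts) else centers[j])
        centers = np.array(new_centers)
    return labels, centers

rng = np.random.default_rng(3)
true_centers = np.array([[0.0, 0.0], [6.0, 0.0], [0.0, 6.0]])  # well separated
X = np.vstack([c + rng.uniform(-1, 1, size=(100, 2)) for c in true_centers])
labels, centers = lloyd_kmeans(X, k=3)
print(np.round(centers, 1))
```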
Nuclear norm minimization for the planted clique and biclique problems We consider the problems of finding a maximum clique in a graph and finding a maximum-edge biclique in a bipartite graph. Both problems are NP-hard. We write both problems as matrix-rank minimization and then relax them using the nuclear norm. This technique, which may be regarded as a generalization of compressive sensing, has recently been shown to be an effective way to solve rank optimization problems. In the special case that the input graph has a planted clique or biclique (i.e., a single large clique or biclique plus diversionary edges), our algorithm successfully provides an exact solution to the original instance. For each problem, we provide two analyses of when our algorithm succeeds. In the first analysis, the diversionary edges are placed by an adversary. In the second, they are placed at random. In the case of random edges for the planted clique problem, we obtain the same bound as Alon, Krivelevich and Sudakov as well as Feige and Krauthgamer, but we use different techniques.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
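A compact sketch of ADMM applied to the lasso, one of the problems listed above: a ridge-type x-update with a cached Cholesky factorization, a soft-thresholding z-update, and a scaled dual update. The penalty parameter and iteration budget are arbitrary.

```python
import numpy as np

def soft(v, tau):
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def admm_lasso(A, b, lam, rho=1.0, iters=200):
    m, n = A.shape
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))   # cache the factorization
    Atb = A.T @ b
    x = z = u = np.zeros(n)
    for _ in range(iters):
        # x-update: ridge-type solve using the cached Cholesky factor.
        x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))
        z = soft(x + u, lam / rho)                      # z-update: shrinkage
        u = u + x - z                                   # scaled dual update
    return z

rng = np.random.default_rng(4)
A = rng.standard_normal((60, 120))
x_true = np.zeros(120); x_true[[5, 40, 77]] = [1.5, -2.0, 0.8]
b = A @ x_true + 0.01 * rng.standard_normal(60)
print(np.nonzero(np.abs(admm_lasso(A, b, lam=0.1)) > 1e-3)[0])
```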
An augmented Lagrangian approach to the constrained optimization formulation of imaging inverse problems. We propose a new fast algorithm for solving one of the standard approaches to ill-posed linear inverse problems (IPLIP), where a (possibly nonsmooth) regularizer is minimized under the constraint that the solution explains the observations sufficiently well. Although the regularizer and constraint are usually convex, several particular features of these problems (huge dimensionality, nonsmoothness) preclude the use of off-the-shelf optimization tools and have stimulated a considerable amount of research. In this paper, we propose a new efficient algorithm to handle one class of constrained problems (often known as basis pursuit denoising) tailored to image recovery applications. The proposed algorithm, which belongs to the family of augmented Lagrangian methods, can be used to deal with a variety of imaging IPLIP, including deconvolution and reconstruction from compressive observations (such as MRI), using either total-variation or wavelet-based (or, more generally, frame-based) regularization. The proposed algorithm is an instance of the so-called alternating direction method of multipliers, for which convergence sufficient conditions are known; we show that these conditions are satisfied by the proposed algorithm. Experiments on a set of image restoration and reconstruction benchmark problems show that the proposed algorithm is a strong contender for the state-of-the-art.
An Interior-Point Method for Large-Scale $\ell_1$-Regularized Least Squares Recently, a lot of attention has been paid to $\ell_1$-regularization-based methods for sparse signal reconstruction (e.g., basis pursuit denoising and compressed sensing) and feature selection (e.g., the Lasso algorithm) in signal processing, statistics, and related fields. These problems can be cast as $\ell_1$-regularized least-squares programs (LSPs), which can be reformulated as convex quadratic programs, and then solved by several standard methods such as interior-point methods, at least for small and medium size problems. In this paper, we describe a specialized interior-point method for solving large-scale, $\ell_1$-regularized LSPs that uses the preconditioned conjugate gradients algorithm to compute the search direction. The interior-point method can solve large sparse problems, with a million variables and observations, in a few tens of minutes on a PC. It can efficiently solve large dense problems, that arise in sparse signal recovery with orthogonal transforms, by exploiting fast algorithms for these transforms. The method is illustrated on a magnetic resonance imaging data set.
Subspace Pursuit for Compressive Sensing: Closing the Gap Between Performance and Complexity We propose a new method for reconstruction of sparse signals with and without noisy perturbations, termed the subspace pursuit algorithm. The algorithm has two important characteristics: low computational complexity, comparable to that of orthogonal matching pursuit techniques, and reconstruction accuracy of the same order as that of LP optimization methods. The presented analysis shows that in the noiseless setting, the proposed algorithm can exactly reconstruct arbitrary sparse signals provided that the sensing matrix satisfies the restricted isometry property with a constant parameter. In the noisy setting and in the case that the signal is not exactly sparse, it can be shown that the mean squared error of the reconstruction is upper bounded by constant multiples of the measurement and signal perturbation energies.
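A sketch of the subspace pursuit iteration as described: merge the current K-element support with the K columns most correlated with the residual, solve least squares on the merged set, keep the K largest coefficients, and stop when the residual stops shrinking. The stopping rule is simplified relative to the paper.

```python
import numpy as np

def subspace_pursuit(A, y, K, max_iter=30):
    n = A.shape[1]
    support = np.argsort(-np.abs(A.T @ y))[:K]            # initial support
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    resid = y - A[:, support] @ coef
    for _ in range(max_iter):
        # Merge with the K columns best correlated with the residual.
        cand = np.union1d(support, np.argsort(-np.abs(A.T @ resid))[:K])
        coef_c, *_ = np.linalg.lstsq(A[:, cand], y, rcond=None)
        new_support = cand[np.argsort(-np.abs(coef_c))[:K]]  # prune back to K
        new_coef, *_ = np.linalg.lstsq(A[:, new_support], y, rcond=None)
        new_resid = y - A[:, new_support] @ new_coef
        if np.linalg.norm(new_resid) >= np.linalg.norm(resid):
            break                                          # residual stopped shrinking
        support, coef, resid = new_support, new_coef, new_resid
    x = np.zeros(n)
    x[support] = coef
    return x

rng = np.random.default_rng(5)
A = rng.standard_normal((30, 80))
x0 = np.zeros(80); x0[[2, 11, 45, 70]] = [1.0, -2.0, 0.5, 1.5]
print(np.nonzero(subspace_pursuit(A, A @ x0, K=4))[0])
```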
Sampling Moments and Reconstructing Signals of Finite Rate of Innovation: Shannon Meets Strang–Fix Consider the problem of sampling signals which are not bandlimited, but still have a finite number of degrees of freedom per unit of time, such as, for example, nonuniform splines or piecewise polynomials, and call the number of degrees of freedom per unit of time the rate of innovation. Classical sampling theory does not enable a perfect reconstruction of such signals since they are not bandlimited. Recently, it was shown that, by using an adequate sampling kernel and a sampling rate greater or equal to the rate of innovation, it is possible to reconstruct such signals uniquely. These sampling schemes, however, use kernels with infinite support, and this leads to complex and potentially unstable reconstruction algorithms. In this paper, we show that many signals with a finite rate of innovation can be sampled and perfectly reconstructed using physically realizable kernels of compact support and a local reconstruction algorithm. The class of kernels that we can use is very rich and includes functions satisfying Strang-Fix conditions, exponential splines and functions with rational Fourier transform. This last class of kernels is quite general and includes, for instance, any linear electric circuit. We, thus, show with an example how to estimate a signal of finite rate of innovation at the output of an RC circuit. The case of noisy measurements is also analyzed, and we present a novel algorithm that reduces the effect of noise by oversampling.
Efficient approximation of random fields for numerical applications This article is dedicated to the rapid computation of separable expansions for the approximation of random fields. We consider approaches based on techniques from the approximation of non-local operators on the one hand and based on the pivoted Cholesky decomposition on the other hand. Especially, we provide an a posteriori error estimate for the pivoted Cholesky decomposition in terms of the trace. Numerical examples are provided to validate and quantify the presented methods.
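A minimal pivoted Cholesky routine with a trace-based stopping rule of the kind the abstract alludes to, applied to a toy squared-exponential covariance; the kernel, grid, and tolerance are arbitrary choices.

```python
import numpy as np

def pivoted_cholesky(C, tol=1e-8, max_rank=None):
    n = C.shape[0]
    max_rank = max_rank or n
    d = np.diag(C).astype(float).copy()       # remaining diagonal (error trace)
    L = np.zeros((n, max_rank))
    for k in range(max_rank):
        if d.sum() <= tol:                     # trace-based stopping criterion
            return L[:, :k]
        i = int(np.argmax(d))                  # greedy pivot: largest diagonal
        L[:, k] = (C[:, i] - L[:, :k] @ L[i, :k]) / np.sqrt(d[i])
        L[i, k] = np.sqrt(d[i])                # exact value at the pivot
        d -= L[:, k] ** 2
        d[i] = 0.0                             # guard against round-off
    return L

# Toy random-field covariance on a 1-D grid.
x = np.linspace(0, 1, 200)
C = np.exp(-(x[:, None] - x[None, :]) ** 2 / 0.1)
L = pivoted_cholesky(C, tol=1e-8)
print("rank:", L.shape[1], " max reconstruction error:", np.abs(C - L @ L.T).max())
```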
Generalizing the Dempster-Shafer theory to fuzzy sets With the desire to manage imprecise and vague information in evidential reasoning, several attempts have been made to generalize the Dempster–Shafer (D–S) theory to deal with fuzzy sets. However, the important principle of the D–S theory, that the belief and plausibility functions are treated as lower and upper probabilities, is no longer preserved in these generalizations. A generalization of the D–S theory in which this principle is maintained is described. It is shown that computing the degree of belief in a hypothesis in the D–S theory can be formulated as an optimization problem. The extended belief function is thus obtained by generalizing the objective function and the constraints of the optimization problem. To combine bodies of evidence that may contain vague information, Dempster's rule is extended by 1) combining generalized compatibility relations based on the possibility theory, and 2) normalizing combination results to account for partially conflicting evidence. Our generalization not only extends the application of the D–S theory but also illustrates a way that probability theory and fuzzy set theory can be integrated in a sound manner in order to deal with different kinds of uncertain information in intelligent systems.
Future Multimedia Networking, Second International Workshop, FMN 2009, Coimbra, Portugal, June 22-23, 2009. Proceedings
Automatic discovery of algorithms for multi-agent systems Automatic algorithm generation for large-scale distributed systems is one of the holy grails of artificial intelligence and agent-based modeling. It has direct applicability in future engineered (embedded) systems, such as mesh networks of sensors and actuators where there is a high need to harness their capabilities via algorithms that have good scalability characteristics. NetLogo has been extensively used as a teaching and research tool by computer scientists, for example for exploring distributed algorithms. Inventing such an algorithm usually involves a tedious reasoning process for each individual idea. In this paper, we report preliminary results in our effort to push the boundary of the discovery process even further, by replacing the classical approach with a guided search strategy that makes use of genetic programming targeting the NetLogo simulator. The effort moves from a manual model implementation to an automated discovery process. The only activity that is required is the implementation of primitives and the configuration of the tool-chain. In this paper, we explore the capabilities of our framework by re-inventing five well-known distributed algorithms.
Subjective Quality Metric For 3d Video Services Three-dimensional (3D) video service is expected to be introduced as a next-generation television service. Stereoscopic video is composed of two 2D video signals for the left and right views, and these 2D video signals are encoded. Video quality between the left and right views is not always consistent because, for example, each view is encoded at a different bit rate. As a result, the video quality difference between the left and right views degrades the quality of stereoscopic video. However, these characteristics have not been thoroughly studied or modeled. Therefore, it is necessary to better understand how the video quality difference affects stereoscopic video quality and to model the video quality characteristics. To do that, we conducted subjective quality assessments to derive subjective video quality characteristics. The characteristics showed that 3D video quality was affected by the difference in video quality between the left and right views, and that when the difference was small, 3D video quality correlated with the highest 2D video quality of the two views. We modeled these characteristics as a subjective quality metric using a training data set. Finally, we verified the performance of our proposed model by applying it to unknown data sets.
1.249984
0.249984
0.249984
0.024992
0.002272
0.000065
0.000009
0
0
0
0
0
0
0
Enabling High-Dimensional Hierarchical Uncertainty Quantification by ANOVA and Tensor-Train Decomposition Hierarchical uncertainty quantification can reduce the computational cost of stochastic circuit simulation by employing spectral methods at different levels. This paper presents an efficient framework to simulate hierarchically some challenging stochastic circuits/systems that include high-dimensional subsystems. Due to the high parameter dimensionality, it is challenging to both extract surrogate models at the low level of the design hierarchy and to handle them in the high-level simulation. In this paper, we develop an efficient analysis of variance-based stochastic circuit/microelectromechanical systems simulator to efficiently extract the surrogate models at the low level. In order to avoid the curse of dimensionality, we employ tensor-train decomposition at the high level to construct the basis functions and Gauss quadrature points. As a demonstration, we verify our algorithm on a stochastic oscillator with four MEMS capacitors and 184 random parameters. This challenging example is efficiently simulated by our simulator at the cost of only 10min in MATLAB on a regular personal computer.
Probabilistic Power Flow Computation via Low-Rank and Sparse Tensor Recovery This paper presents a tensor-recovery method to solve probabilistic power flow problems. Our approach generates a high-dimensional and sparse generalized polynomial-chaos expansion that provides useful statistical information. The result can also speed up other essential routines in power systems (e.g., stochastic planning, operations and controls). Instead of simulating a power flow equation at all quadrature points, our approach only simulates an extremely small subset of samples. We suggest a model to exploit the underlying low-rank and sparse structure of high-dimensional simulation data arrays, making our technique applicable to power systems with many random parameters. We also present a numerical method to solve the resulting nonlinear optimization problem. Our algorithm is implemented in MATLAB and is verified by several benchmarks in MATPOWER 5.1. Accurate results are obtained for power systems with up to 50 independent random parameters, with a speedup factor up to 9×10^20.
A low-rank approach to the computation of path integrals We present a method for solving the reaction–diffusion equation with general potential in free space. It is based on the approximation of the Feynman–Kac formula by a sequence of convolutions on sequentially diminishing grids. For computation of the convolutions we propose a fast algorithm based on the low-rank approximation of the Hankel matrices. The algorithm has complexity of O(nrM log M + nr²M) flops and requires O(Mr) floating-point numbers in memory, where n is the dimension of the integral, r ≪ n, and M is the mesh size in one dimension. The presented technique can be generalized to the higher-order diffusion processes.
Joint sizing and adaptive independent gate control for FinFET circuits operating in multiple voltage regimes using the logical effort method FinFET has been proposed as an alternative for bulk CMOS in current and future technology nodes due to more effective channel control, reduced random dopant fluctuation, high ON/OFF current ratio, lower energy consumption, etc. Key characteristics of FinFET operating in the sub/near-threshold region are very different from those in the strong-inversion region. This paper first introduces an analytical transregional FinFET model with high accuracy in both sub- and near-threshold regimes. Next, the paper extends the well-known and widely-adopted logical effort delay calculation and optimization method to FinFET circuits operating in multiple voltage (sub/near/super-threshold) regimes. More specifically, a joint optimization of gate sizing and adaptive independent gate control is presented and solved in order to minimize the delay of FinFET circuits operating in multiple voltage regimes. Experimental results on a 32nm Predictive Technology Model for FinFET demonstrate the effectiveness of the proposed logical effort-based delay optimization framework.
Macromodel Generation for BioMEMS Components Using a Stabilized Balanced Truncation Plus Trajectory Piecewise-Linear Approach In this paper, we present a technique for automatically extracting nonlinear macromodels of biomedical microelectromechanical systems devices from physical simulation. The technique is a modification of the recently developed trajectory piecewise-linear approach, but uses ideas from balanced truncation to produce much lower order and more accurate models. The key result is a perturbation analysis of an instability problem with the reduction algorithm, and a simple modification that makes the algorithm more robust. Results are presented from examples to demonstrate dramatic improvements in reduced model accuracy and show the limitations of the method.
Low-Rank Tensor Networks for Dimensionality Reduction and Large-Scale Optimization Problems: Perspectives and Challenges PART 1.
Tensor Decompositions for Signal Processing Applications: From two-way to multiway component analysis. The widespread use of multi-sensor technology and the emergence of big datasets has highlighted the limitations of standard flat-view matrix models and the necessity to move towards more versatile data analysis tools. We show that higher-order tensors (i.e., multiway arrays) enable such a fundamental paradigm shift towards models that are essentially polynomial and whose uniqueness, unlike the matrix methods, is guaranteed under very mild and natural conditions. Benefiting from the power of multilinear algebra as their mathematical backbone, data analysis techniques using tensor decompositions are shown to have great flexibility in the choice of constraints that match data properties, and to find more general latent components in the data than matrix-based methods. A comprehensive introduction to tensor decompositions is provided from a signal processing perspective, starting from the algebraic foundations, via basic Canonical Polyadic and Tucker models, through to advanced cause-effect and multi-view data analysis schemes. We show that tensor decompositions enable natural generalizations of some commonly used signal processing paradigms, such as canonical correlation and subspace techniques, signal separation, linear regression, feature extraction and classification. We also cover computational aspects, and point out how ideas from compressed sensing and scientific computing may be used for addressing the otherwise unmanageable storage and manipulation problems associated with big datasets. The concepts are supported by illustrative real world case studies illuminating the benefits of the tensor framework, as efficient and promising tools for modern signal processing, data analysis and machine learning applications; these benefits also extend to vector/matrix data through tensorization. Keywords: ICA, NMF, CPD, Tucker decomposition, HOSVD, tensor networks, Tensor Train.
Fast Variational Analysis of On-Chip Power Grids by Stochastic Extended Krylov Subspace Method This paper proposes a novel stochastic method for analyzing the voltage drop variations of on-chip power grid networks, considering lognormal leakage current variations. The new method, called StoEKS, applies Hermite polynomial chaos to represent the random variables in both power grid networks and input leakage currents. However, different from the existing orthogonal polynomial-based stochastic simulation method, extended Krylov subspace (EKS) method is employed to compute variational responses from the augmented matrices consisting of the coefficients of Hermite polynomials. Our contribution lies in the acceleration of the spectral stochastic method using the EKS method to fast solve the variational circuit equations for the first time. By using the reduction technique, the new method partially mitigates increased circuit-size problem associated with the augmented matrices from the Galerkin-based spectral stochastic method. Experimental results show that the proposed method is about two orders of magnitude faster than the existing Hermite PC-based simulation method and many orders of magnitude faster than Monte Carlo methods with marginal errors. StoEKS is scalable for analyzing much larger circuits than the existing Hermite PC-based methods.
Virtual probe: a statistically optimal framework for minimum-cost silicon characterization of nanoscale integrated circuits In this paper, we propose a new technique, referred to as virtual probe (VP), to efficiently measure, characterize and monitor both inter-die and spatially-correlated intra-die variations in nanoscale manufacturing process. VP exploits recent breakthroughs in compressed sensing [15]–[17] to accurately predict spatial variations from an exceptionally small set of measurement data, thereby reducing the cost of silicon characterization. By exploring the underlying sparse structure in (spatial) frequency domain, VP achieves substantially lower sampling frequency than the well-known (spatial) Nyquist rate. In addition, VP is formulated as a linear programming problem and, therefore, can be solved both robustly and efficiently. Our industrial measurement data demonstrate that by testing the delay of just 50 chips on a wafer, VP accurately predicts the delay of the other 219 chips on the same wafer. In this example, VP reduces the estimation error by up to 10× compared to other traditional methods.
Model Order Reduction of Parameterized Interconnect Networks via a Two-Directional Arnoldi Process This paper presents a multiparameter moment-matching-based model order reduction technique for parameterized interconnect networks via a novel two-directional Arnoldi process (TAP). It is referred to as a Parameterized Interconnect Macromodeling via a TAP (PIMTAP) algorithm. PIMTAP inherits the advantages of previous multiparameter moment-matching algorithms and avoids their shortfalls. It is numerically stable and adaptive. PIMTAP model yields the same form of the original state equations and preserves the passivity of parameterized RLC networks like the well-known method passive reduced-order interconnect macromodeling algorithm for nonparameterized RLC networks.
Theory and Implementation of an Analog-to-Information Converter using Random Demodulation The new theory of compressive sensing enables direct analog-to-information conversion of compressible signals at sub-Nyquist acquisition rates. The authors develop new theory, algorithms, performance bounds, and a prototype implementation for an analog-to-information converter based on random demodulation. The architecture is particularly apropos for wideband signals that are sparse in the time-frequency plane. End-to-end simulations of a complete transistor-level implementation prove the concept under the effect of circuit nonidealities.
Pattern codification strategies in structured light systems Coded structured light is considered one of the most reliable techniques for recovering the surface of objects. This technique is based on projecting a light pattern and viewing the illuminated scene from one or more points of view. Since the pattern is coded, correspondences between image points and points of the projected pattern can be easily found. The decoded points can be triangulated and 3D information is obtained. We present an overview of the existing techniques, as well as a new and definitive classification of patterns for structured light sensors. We have implemented a set of representative techniques in this field and present some comparative results. The advantages and constraints of the different patterns are also discussed.
Compressed sensing with probabilistic measurements: a group testing solution Detection of defective members of large populations has been widely studied in the statistics community under the name "group testing", a problem which dates back to World War II when it was suggested for syphilis screening. There, the main interest is to identify a small number of infected people among a large population using collective samples. In viral epidemics, one way to acquire collective samples is by sending agents inside the population. While in classical group testing, it is assumed that the sampling procedure is fully known to the reconstruction algorithm, in this work we assume that the decoder possesses only partial knowledge about the sampling process. This assumption is justified by observing the fact that in a viral sickness, there is a chance that an agent remains healthy despite having contact with an infected person. Therefore, the reconstruction method has to cope with two different types of uncertainty; namely, identification of the infected population and the partially unknown sampling procedure. In this work, by using a natural probabilistic model for "viral infections", we design non-adaptive sampling procedures that allow successful identification of the infected population with overwhelming probability 1 - o(1). We propose both probabilistic and explicit design procedures that require a "small" number of agents to single out the infected individuals. More precisely, for a contamination probability p, the number of agents required by the probabilistic and explicit designs for identification of up to k infected members is bounded by m = O(k²(log n)/p²) and m = O(k²(log² n)/p²), respectively. In both cases, a simple decoder is able to successfully identify the infected population in time O(mn).
Bounding the Dynamic Behavior of an Uncertain System via Polynomial Chaos-based Simulation Parametric uncertainty can represent parametric tolerance, parameter noise or parameter disturbances. The effects of these uncertainties on the time evolution of a system can be extremely significant, mostly when studying closed-loop operation of control systems. The presence of uncertainty makes the modeling process challenging, since it is impossible to express the behavior of the system with a deterministic approach. If the uncertainties can be defined in terms of probability density function, probabilistic approaches can be adopted. In many cases, the most useful aspect is the evaluation of the worst-case scenario, thus limiting the problem to the evaluation of the boundary of the set of solutions. This is particularly true for the analysis of robust stability and performance of a closed-loop system. The goal of this paper is to demonstrate how the polynomial chaos theory (PCT) can simplify the determination of the worst-case scenario, quickly providing the boundaries in time domain. The proposed approach is documented with examples and with the description of the Maple worksheet developed by the authors for the automatic processing in the PCT framework.
1.015902
0.019752
0.017143
0.015886
0.015175
0.014286
0.007588
0.003256
0.000401
0.000044
0
0
0
0
Robust Analog/RF Circuit Design With Projection-Based Performance Modeling In this paper, a robust analog design (ROAD) tool for post-tuning (i.e., locally optimizing) analog/RF circuits is proposed. Starting from an initial design derived from hand analysis or analog circuit optimization based on simplified models, ROAD extracts accurate performance models via transistor-level simulation and iteratively improves the circuit performance by a sequence of geometric programming steps. Importantly, ROAD sets up all design constraints to include large-scale process and environmental variations, thereby facilitating the tradeoff between yield and performance. A crucial component of ROAD is a novel projection-based scheme for quadratic (both polynomial and posynomial) performance modeling, which allows our approach to scale well to large problem sizes. A key feature of this projection-based scheme is a new implicit power iteration algorithm to find the optimal projection space and extract the unknown model coefficients with robust convergence. The efficacy of ROAD is demonstrated on several circuit examples
Efficient parametric yield extraction for multiple correlated non-normal performance distributions of Analog/RF circuits In this paper we propose an efficient numerical algorithm to estimate the parametric yield of analog/RF circuits with consideration of large-scale process variations. Unlike many traditional approaches that assume Normal performance distributions, the proposed approach is especially developed to handle multiple correlated non-Normal performance distributions, thereby providing better accuracy than other traditional techniques. Starting from a set of quadratic performance models, the proposed parametric yield extraction conceptually maps multiple correlated performance constraints to a single auxiliary constraint using a MAX(·) operator. As such, the parametric yield is uniquely determined by the probability distribution of the auxiliary constraint and, therefore, can be easily computed. In addition, a novel second-order statistical Taylor expansion is proposed for an analytical MAX(·) approximation, facilitating fast yield estimation. Our numerical examples in a commercial BiCMOS process demonstrate that the proposed algorithm provides 2–3× error reduction compared with a Normal-distribution-based method, while achieving orders of magnitude more efficiency than the Monte Carlo analysis with 10^4 samples.
Phase Noise and Noise Induced Frequency Shift in Stochastic Nonlinear Oscillators Phase noise plays an important role in the performances of electronic oscillators. Traditional approaches describe the phase noise problem as a purely diffusive process. In this paper we develop a novel phase model reduction technique for phase noise analysis of nonlinear oscillators subject to stochastic inputs. We obtain analytical equations for both the phase deviation and the probability density function of the phase deviation. We show that, in general, the phase reduced models include non-Markovian terms. Under the Markovian assumption, we demonstrate that the effect of white noise is to generate both phase diffusion and a frequency shift, i.e. phase noise is best described as a convection-diffusion process. The analysis of a solvable model shows the accuracy of our theory, and that it gives better predictions than traditional phase models.
Fast 3-D Thermal Simulation for Integrated Circuits With Domain Decomposition Method For accurate thermal simulation of integrated circuits (ICs), heat sink components in chip package must be considered. In this letter, techniques based on the domain decomposition method (DDM) are presented for the 3-D thermal simulation of nonrectangular IC thermal model including heat sink and heat spreader. A relaxed nonoverlapping DDM algorithm is employed to convert the problem to subproblems on rectangular subdomains. Then, a nonconformal discretization strategy is proposed to reduce the problem complexity with negligible error. Numerical experiments on several 2-D and 3-D IC test cases demonstrate that the relaxed nonoverlapping DDM is faster than the other preconditioned conjugate gradient algorithms with same mesh grid. The nonconformal discretization achieves further 10× reduction of runtime and memory usage.
STORM: A nonlinear model order reduction method via symmetric tensor decomposition Nonlinear model order reduction has always been a challenging but important task in various science and engineering fields. In this paper, a novel symmetric tensor-based order-reduction method (STORM) is presented for simulating large-scale nonlinear systems. The multidimensional data structure of symmetric tensors, as the higher order generalization of symmetric matrices, is utilized for the effective capture of high-order nonlinearities and efficient generation of compact models. Compared to the recent tensor-based nonlinear model order reduction (TNMOR) algorithm [1], STORM shows advantages in two aspects. First, STORM avoids the assumption of the existence of a low-rank tensor approximation. Second, with the use of the symmetric tensor decomposition, STORM allows significantly faster computation and less storage complexity than TNMOR. Numerical experiments demonstrate the superior computational efficiency and accuracy of STORM against existing nonlinear model order reduction methods.
Stable Reduced Models for Nonlinear Descriptor Systems Through Piecewise-Linear Approximation and Projection This paper presents theoretical and practical results concerning the stability of piecewise-linear (PWL) reduced models for the purposes of analog macromodeling. Results include proofs of input-output (I/O) stability for PWL approximations to certain classes of nonlinear descriptor systems, along with projection techniques that are guaranteed to preserve I/O stability in reduced-order PWL models. We also derive a new PWL formulation and introduce a new nonlinear projection, allowing us to extend our stability results to a broader class of nonlinear systems described by models containing nonlinear descriptor functions. Lastly, we present algorithms to compute efficiently the required stabilizing nonlinear left-projection matrix operators.
A convex programming approach for generating guaranteed passive approximations to tabulated frequency-data In this paper, we present a methodology for generating guaranteed passive time-domain models of subsystems described by tabulated frequency-domain data obtained through measurement or through physical simulation. Such descriptions are commonly used to represent on- and off-chip interconnect effects, package parasitics, and passive devices common in high-frequency integrated circuit applications. The approach, which incorporates passivity constraints via convex optimization algorithms, is guaranteed to produce a passive-system model that is optimal in the sense of having minimum error in the frequency band of interest over all models with a prescribed set of system poles. We demonstrate that this algorithm is computationally practical for generating accurate high-order models of data sets representing realistic, complicated multiinput, multioutput systems.
Enabling High-Dimensional Hierarchical Uncertainty Quantification by ANOVA and Tensor-Train Decomposition Hierarchical uncertainty quantification can reduce the computational cost of stochastic circuit simulation by employing spectral methods at different levels. This paper presents an efficient framework to simulate hierarchically some challenging stochastic circuits/systems that include high-dimensional subsystems. Due to the high parameter dimensionality, it is challenging to both extract surrogate models at the low level of the design hierarchy and to handle them in the high-level simulation. In this paper, we develop an efficient analysis of variance-based stochastic circuit/microelectromechanical systems simulator to efficiently extract the surrogate models at the low level. In order to avoid the curse of dimensionality, we employ tensor-train decomposition at the high level to construct the basis functions and Gauss quadrature points. As a demonstration, we verify our algorithm on a stochastic oscillator with four MEMS capacitors and 184 random parameters. This challenging example is efficiently simulated by our simulator at the cost of only 10min in MATLAB on a regular personal computer.
Nonparametric multivariate density estimation: a comparative study The paper algorithmically and empirically studies two major types of nonparametric multivariate density estimation techniques, where no assumption is made about the data being drawn from any of known parametric families of distribution. The first type is the popular kernel method (and several of its variants) which uses locally tuned radial basis (e.g., Gaussian) functions to interpolate the multidimensional density; the second type is based on an exploratory projection pursuit technique which interprets the multidimensional density through the construction of several 1D densities along highly “interesting” projections of multidimensional data. Performance evaluations using training data from mixture Gaussian and mixture Cauchy densities are presented. The results show that the curse of dimensionality and the sensitivity of control parameters have a much more adverse impact on the kernel density estimators than on the projection pursuit density estimators
Statistical design and optimization of SRAM cell for yield enhancement We have analyzed and modeled the failure probabilities of SRAM cells due to process parameter variations. A method to predict the yield of a memory chip based on the cell failure probability is proposed. The developed method is used in an early stage of a design cycle to minimize memory failure probability by statistical sizing of the SRAM cell.
MapReduce: simplified data processing on large clusters MapReduce is a programming model and an associated implementation for processing and generating large datasets that is amenable to a broad variety of real-world tasks. Users specify the computation in terms of a map and a reduce function, and the underlying runtime system automatically parallelizes the computation across large-scale clusters of machines, handles machine failures, and schedules inter-machine communication to make efficient use of the network and disks. Programmers find the system easy to use: more than ten thousand distinct MapReduce programs have been implemented internally at Google over the past four years, and an average of one hundred thousand MapReduce jobs are executed on Google's clusters every day, processing a total of more than twenty petabytes of data per day.
System-Supported Individualized Customer Contact in the Multi-Channel World of the Financial Services Industry - Representing Customer Attitudes in a Customer Model (original German title: Systemunterstützt individualisierte Kundenansprache in der Mehrkanalwelt der Finanzdienstleistungsbranche - Repräsentation der Einstellungen von Kunden in einem Kundenmodell)
Parallel Opportunistic Routing in Wireless Networks We study benefits of opportunistic routing in a large wireless ad hoc network by examining how the power, delay, and total throughput scale as the number of source–destination pairs increases up to the operating maximum. Our opportunistic routing is novel in a sense that it is massively parallel, i.e., it is performed by many nodes simultaneously to maximize the opportunistic gain while controlling the interuser interference. The scaling behavior of conventional multihop transmission that does not employ opportunistic routing is also examined for comparison. Our main results indicate that our opportunistic routing can exhibit a net improvement in overall power–delay tradeoff over the conventional routing by providing up to a logarithmic boost in the scaling law. Such a gain is possible since the receivers can tolerate more interference due to the increased received signal power provided by the multi user diversity gain, which means that having more simultaneous transmissions is possible.
A Machine Learning Approach to Personal Pronoun Resolution in Turkish.
1.016779
0.015019
0.014874
0.014874
0.014874
0.007437
0.004964
0.002109
0.000182
0.000023
0
0
0
0
New Null Space Results and Recovery Thresholds for Matrix Rank Minimization Nuclear norm minimization (NNM) has recently gained significant attention for its use in rank minimization problems. Similar to compressed sensing, using null space characterizations, recovery thresholds for NNM have been studied in [arxiv, Recht_Xu_Hassibi]. However simulations show that the thresholds are far from optimal, especially in the low rank region. In this paper we apply the recent analysis of Stojnic for compressed sensing [mihailo] to the null space conditions of NNM. The resulting thresholds are significantly better and in particular our weak threshold appears to match with simulation results. Further our curves suggest for any rank growing linearly with matrix size n we need only three times of oversampling (the model complexity) for weak recovery. Similar to [arxiv], we analyze the conditions for weak, sectional and strong thresholds. Additionally a separate analysis is given for special case of positive semidefinite matrices. We conclude by discussing simulation results and future research directions.
Exponential bounds implying construction of compressed sensing matrices, error-correcting codes, and neighborly polytopes by random sampling In "Counting faces of randomly projected polytopes when the projection radically lowers dimension" the authors proved an asymptotic sampling theorem for sparse signals, showing that n random measurements permit to reconstruct an N-vector having k nonzeros provided n ≥ 2 · k · log(N/n)(1+o(1)); reconstruction uses ℓ1 minimization. They also proved an asymptotic rate theorem, showing existence of real error-correcting codes for messages of length N which can correct all possible k-element error patterns using just n generalized checksum bits, where n ≥ 2e · k · log(N/n)(1+o(1)); decoding uses ℓ1 minimization. Both results require an asymptotic framework, with N growing large. For applications, on the other hand, we are concerned with specific triples k, n, N. We exhibit triples (k, n, N) for which Compressed Sensing Matrices and Real Error-Correcting Codes surely exist and can be obtained with high probability by random sampling. These derive from exponential bounds on the probability of drawing 'bad' matrices. The bounds give conditions effective at finite-N, and converging to the known sharp asymptotic conditions for large N. Compared to other finite-N bounds known to us, they are much stronger, and much more explicit. Our bounds derive from asymptotics in "Counting faces of randomly projected polytopes when the projection radically lowers dimension" counting the expected number of k-dimensional faces of the randomly projected simplex T^{N-1} and cross-polytope C^N. We develop here finite-N bounds on the expected discrepancy between the number of k-faces of the projected polytope AQ and its generator Q, for Q = T^{N-1} and C^N. Our bounds also imply existence of interesting geometric objects. Thus, we exhibit triples (k, n, N) for which polytopes with 2N vertices can be centrally k-neighborly.
Compressed Sensing with Coherent and Redundant Dictionaries This article presents novel results concerning the recovery of signals from undersampled data in the common situation where such signals are not sparse in an orthonormal basis or incoherent dictionary, but in a truly redundant dictionary. This work thus bridges a gap in the literature and shows not only that compressed sensing is viable in this context, but also that accurate recovery is possible via an ℓ1-analysis optimization problem. We introduce a condition on the measurement/sensing matrix, which is a natural generalization of the now well-known restricted isometry property, and which guarantees accurate recovery of signals that are nearly sparse in (possibly) highly overcomplete and coherent dictionaries. This condition imposes no incoherence restriction on the dictionary and our results may be the first of this kind. We discuss practical examples and the implications of our results on those applications, and complement our study by demonstrating the potential of ℓ1-analysis for such problems.
Information-Theoretic Limits on Sparsity Recovery in the High-Dimensional and Noisy Setting The problem of sparsity pattern or support set recovery refers to estimating the set of nonzero coefficients of an unknown vector β* ∈ ℝ^p based on a set of n noisy observations. It arises in a variety of settings, including subset selection in regression, graphical model selection, signal denoising, compressive sensing, and constructive approximation. The sample complexity of a given method for subset recovery refers to the scaling of the required sample size n as a function of the signal dimension p, sparsity index k (number of non-zeroes in β*), as well as the minimum value β_min of β* over its support and other parameters of measurement matrix. This paper studies the information-theoretic limits of sparsity recovery: in particular, for a noisy linear observation model based on random measurement matrices drawn from general Gaussian measurement matrices, we derive both a set of sufficient conditions for exact support recovery using an exhaustive search decoder, as well as a set of necessary conditions that any decoder, regardless of its computational complexity, must satisfy for exact support recovery. This analysis of fundamental limits complements our previous work on sharp thresholds for support set recovery over the same set of random measurement ensembles using the polynomial-time Lasso method (ℓ1-constrained quadratic programming).
Uncertainty principles and ideal atomic decomposition Suppose a discrete-time signal S(t), 0 ≤ t < N, is a superposition of atoms taken from a combined time-frequency dictionary made of spike sequences 1{t=τ} and sinusoids exp{2πiwt/N}/√N. Can one recover, from knowledge of S alone, the precise collection of atoms going to make up S? Because every discrete-time signal can be represented as a superposition of spikes alone, or as a superposition of sinusoids alone, there is no unique way of writing S as a sum of spikes and sinusoids in general. We prove that if S is representable as a highly sparse superposition of atoms from this time-frequency dictionary, then there is only one such highly sparse representation of S, and it can be obtained by solving the convex optimization problem of minimizing the ℓ1 norm of the coefficients among all decompositions. Here "highly sparse" means that N_t + N_w < √N/2 where N_t is the number of time atoms, N_w is the number of frequency atoms, and N is the length of the discrete-time signal. Underlying this result is a general ℓ1 uncertainty principle which says that if two bases are mutually incoherent, no nonzero signal can have a sparse representation in both bases simultaneously. For the above setting, the bases are sinusoids and spikes, and mutual incoherence is measured in terms of the largest inner product between different basis elements. The uncertainty principle holds for a variety of interesting basis pairs, not just sinusoids and spikes. The results have idealized applications to band-limited approximation with gross errors, to error-correcting encryption, and to separation of uncoordinated sources. Related phenomena hold for functions of a real variable, with basis pairs such as sinusoids and wavelets, and for functions of two variables, with basis pairs such as wavelets and ridgelets. In these settings, if a function f is representable by a sufficiently sparse superposition of terms taken from both bases, then there is only one such sparse representation; it may be obtained by minimum ℓ1 norm atomic decomposition. The condition "sufficiently sparse" becomes a multiscale condition; for example, that the number of wavelets at level j plus the number of sinusoids in the jth dyadic frequency band are together less than a constant times 2^{j/2}.
Variations, margins, and statistics Design margining is used to account for design uncertainties in the measurement of performance, and thereby ensures that actual manufactured parts will operate in within predicted bounds. As process and environmental variations become increasingly severe and complex in nanometer process technology, design margining overheads have increased correspondingly. This paper describes the types of process and environmental variations, their impact on performance, and the traditional design margining process used to account for these uncertainties. We consider statistical timing (SSTA) in the context of its ability to reduce timing margins through more accurate modeling of variations, and quantify potential benefits of SSTA for setup and hold time margin reduction. Combining SSTA with complementary techniques for systematic variation-aware and voltage-variation-aware timing provides meaningful design margin reduction. We introduce the concept of activity based operating condition as a supporting construct for variation-aware STA flows
NIST Net: a Linux-based network emulation tool Testing of network protocols and distributed applications has become increasingly complex, as the diversity of networks and underlying technologies increase, and the adaptive behavior of applications becomes more sophisticated. In this paper, we present NIST Net, a tool to facilitate testing and experimentation with network code through emulation. NIST Net enables experimenters to model and effect arbitrary performance dynamics (packet delay, jitter, bandwidth limitations, congestion, packet loss and duplication) on live IP packets passing through a commodity Linux-based PC router. We describe the emulation capabilities of NIST Net; examine its architecture; and discuss some of the implementation challenges encountered in building such a tool to operate at very high network data rates while imposing minimal processing overhead. Calibration results are provided to quantify the fidelity and performance of NIST Net over a wide range of offered loads (up to 1 Gbps), and a diverse set of emulated performance dynamics.
Scale-Space and Edge Detection Using Anisotropic Diffusion A new definition of scale-space is suggested, and a class of algorithms used to realize a diffusion process is introduced. The diffusion coefficient is chosen to vary spatially in such a way as to encourage intraregion smoothing rather than interregion smoothing. It is shown that the 'no new maxima should be generated at coarse scales' property of conventional scale space is preserved. As the region boundaries in the approach remain sharp, a high-quality edge detector which successfully exploits global information is obtained. Experimental results are shown on a number of images. Parallel hardware implementations are made feasible because the algorithm involves elementary, local operations replicated over the image.
On the quasi-Monte Carlo method with Halton points for elliptic PDEs with log-normal diffusion. This article is dedicated to the computation of the moments of the solution to elliptic partial differential equations with random, log-normally distributed diffusion coefficients by the quasi-Monte Carlo method. Our main result is that the convergence rate of the quasi-Monte Carlo method based on the Halton sequence for the moment computation depends only linearly on the dimensionality of the stochastic input parameters. In particular, we attain this rather mild dependence on the stochastic dimensionality without any randomization of the quasi-Monte Carlo method under consideration. For the proof of the main result, we require related regularity estimates for the solution and its powers. These estimates are also provided here. Numerical experiments are given to validate the theoretical findings.
A simple Cooperative diversity method based on network path selection Cooperative diversity has been recently proposed as a way to form virtual antenna arrays that provide dramatic gains in slow fading wireless environments. However, most of the proposed solutions require distributed space-time coding algorithms, the careful design of which is left for future investigation if there is more than one cooperative relay. We propose a novel scheme that alleviates these problems and provides diversity gains on the order of the number of relays in the network. Our scheme first selects the best relay from a set of M available relays and then uses this "best" relay for cooperation between the source and the destination. We develop and analyze a distributed method to select the best relay that requires no topology information and is based on local measurements of the instantaneous channel conditions. This method also requires no explicit communication among the relays. The success (or failure) to select the best available path depends on the statistics of the wireless channel, and a methodology to evaluate performance for any kind of wireless channel statistics, is provided. Information theoretic analysis of outage probability shows that our scheme achieves the same diversity-multiplexing tradeoff as achieved by more complex protocols, where coordination and distributed space-time coding for M relay nodes is required, such as those proposed by Laneman and Wornell (2003). The simplicity of the technique allows for immediate implementation in existing radio hardware and its adoption could provide for improved flexibility, reliability, and efficiency in future 4G wireless systems.
Is Gauss Quadrature Better than Clenshaw-Curtis? We compare the convergence behavior of Gauss quadrature with that of its younger brother, Clenshaw-Curtis. Seven-line MATLAB codes are presented that implement both methods, and experiments show that the supposed factor-of-2 advantage of Gauss quadrature is rarely realized. Theorems are given to explain this effect. First, following O'Hara and Smith in the 1960s, the phenomenon is explained as a consequence of aliasing of coefficients in Chebyshev expansions. Then another explanation is offered based on the interpretation of a quadrature formula as a rational approximation of log((z+1)/(z-1)) in the complex plane. Gauss quadrature corresponds to Padé approximation at z = ∞. Clenshaw-Curtis quadrature corresponds to an approximation whose order of accuracy at z = ∞ is only half as high, but which is nevertheless equally accurate near [-1,1].
Using trapezoids for representing granular objects: Applications to learning and OWA aggregation We discuss the role and benefits of using trapezoidal representations of granular information. We focus on the use of level sets as a tool for implementing many operations on trapezoidal sets. We point out the simplification that the linearity of the trapezoid brings by requiring us to perform operations on only two level sets. We investigate the classic learning algorithm in the case when our observations are granule objects represented as trapezoidal fuzzy sets. An important issue that arises is the adverse effect that very uncertain observations have on the quality of our estimates. We suggest an approach to addressing this problem using the specificity of the observations to control its effect. We next consider the OWA aggregation of information represented as trapezoids. An important problem that arises here is the ordering of the trapezoidal fuzzy sets needed for the OWA aggregation. We consider three approaches to accomplish this ordering based on the location, specificity and fuzziness of the trapezoids. From these three different approaches three fundamental methods of ordering are developed. One based on the mean of the 0.5 level sets, another based on the length of the 0.5 level sets and a third based on the difference in lengths of the core and support level sets. Throughout this work particular emphasis is placed on the simplicity of working with trapezoids while still retaining a rich representational capability.
Application of FMCDM model to selecting the hub location in the marine transportation: A case study in southeastern Asia Hub location selection problems have become one of the most popular and important issues not only in the truck transportation and the air transportation, but also in the marine transportation. The main focus of this paper is on container transshipment hub locations in southeastern Asia. Transshipment is the fastest growing segment of the containerport market, resulting in significant scope to develop new transshipment terminal capacity to cater for future expected traffic flows. A shipping carrier not only calculates transport distances and operation costs, but also evaluates some qualitative conditions for existing hub locations and then selects an optimal container transshipment hub location in the region. In this paper, a fuzzy multiple criteria decision-making (FMCDM) model is proposed for evaluating and selecting the container transshipment hub port. Finally, the utilization of the proposed FMCDM model is demonstrated with a case study of hub locations in southeastern Asia. The results show that the FMCDM model proposed in this paper can be used to explain the evaluation and decision-making procedures of hub location selection well. In addition, the preferences are calculated for existing hub locations and these are then compared with a new proposed container transshipment hub location in the region, in this instance the Port of Shanghai. Furthermore, a sensitivity analysis is performed.
Designing type-2 fuzzy logic system controllers via fuzzy Lyapunov synthesis for the output regulator of a servomechanism with nonlinear backlash Fuzzy Lyapunov Synthesis is extended to the design of Type-2 Fuzzy Logic System Controllers for the output regulation problem for a servomechanism with nonlinear backlash. The problem in question is to design a feedback controller so as to obtain the closed-loop system in which all trajectories are bounded and the load of the driver is regulated to a desired position while also attenuating the influence of external disturbances. The servomotor position is the only measurement available for feedback; the proposed extension is far from trivial because of nonminimum phase properties of the system. Performance issues of the Type-2 Fuzzy Logic Regulator constructed are illustrated in a simulation study.
1.2
0.1
0.014286
0.008333
0.002198
0
0
0
0
0
0
0
0
0
Is QoE estimation based on QoS parameters sufficient for video quality assessment? Internet service providers today offer a variety of audio, video and data services. Traditional approaches to the quality assessment of video services were based on Quality of Service (QoS) measurements, which are performance measurements at the network level. However, an accurate quality assessment requires the video to be assessed subjectively by the user, while QoS parameters are much easier to obtain than subjective QoE scores. Therefore, some recent works have investigated objective approaches to estimate QoE scores from measured QoS parameters, the main purpose being the control of QoE based on QoS measurements. This paper presents several solutions and models proposed in the literature. We discuss further factors that must be considered in the mapping process between QoS and QoE, and the impact of these factors on perceived QoE is verified through subjective tests.
An example of real time QoE IPTV service estimator This paper considers an estimator that includes mathematical modelling of the physical channel parameters, the channel being the information carrier and the weakest link in the telecommunication chain of information transfer. It also identifies the physical-layer parameters that influence the quality of multimedia service delivery, or QoE (Quality of Experience). By modelling the above-mentioned parameters, we define the relation between the degradations that appear in the channel between the user and the central telecommunication equipment, where a single medium dominates the information transfer with a certain error probability. Degradations in the physical channel can be detected by observing changes in the channel transfer function or the appearance of increased noise. Estimation of QoE for an IPTV (Internet Protocol Television) service is especially necessary during real-time service delivery, since the mentioned degradations may appear at any moment and cause packet loss.
The Impact Of Interactivity On The Qoe: A Preliminary Analysis The interactivity in multimedia services concerns the user's input/output interaction with the system, as well as its cooperativity. It is an important element that affects the overall Quality of Experience (QoE), and it may even mask the impact of the quality level of the (audio and visual) signal itself on the overall user perception. This work is a preliminary study aimed at evaluating the weight of the interactivity, relying on subjective assessments conducted while varying the artefacts, genre and interactivity features of the video streaming services evaluated by the subjects. Subjective evaluations were collected from 25 subjects in compliance with ITU-T Recommendation P.910 through single-stimulus Absolute Category Rating (ACR). The results showed that the impact of the interactivity is influenced by the presence of other components, such as buffer starvations and the type of content displayed. An objective quality metric able to measure the influence of the interactivity on the QoE has also been defined, and it has proved to be highly correlated with the subjective results. We conclude that the interactivity feature can be successfully represented by either an additive or a multiplicative component to be added to existing quality metrics.
Impact Of Mobile Devices And Usage Location On Perceived Multimedia Quality We explore the quality impact when audiovisual content is delivered to different mobile devices. Subjects were shown the same sequences on five different mobile devices and a broadcast quality television. Factors influencing quality ratings include video resolution, viewing distance, and monitor size. Analysis shows how subjects' perception of multimedia quality differs when content is viewed on different mobile devices. In addition, quality ratings from laboratory and simulated living room sessions were statistically equivalent.
Energy saving approaches for video streaming on smartphone based on QoE modeling In this paper, we study the influence of video stalling on QoE. We provide QoE models that are obtained in realistic scenarios on the smartphone, and propose energy-saving approaches for smartphones by leveraging the proposed QoE models in relation to energy. Results show that approximately 5 J is saved in a 3-minute video clip with an acceptable Mean Opinion Score (MOS) level when video frames are skipped. If the video frames are not skipped, it is suggested to avoid freezes during a video stream, as freezes greatly increase the energy waste on smartphones.
QoE in 10 seconds: Are short video clip lengths sufficient for Quality of Experience assessment? Standard methodologies for subjective video quality testing are based on very short test clips of 10 seconds. But is this duration sufficient for Quality of Experience assessment? In this paper, we present the results of a comparative user study that tests whether quality perception and rating behavior may be different if video clip durations are longer. We did not find strong overall MOS differences between clip durations, but the three longer clips (60, 120 and 240 seconds) were rated slightly more positively than the three shorter durations under comparison (10, 15 and 30 seconds). This difference was most apparent when high quality videos were presented. However, we did not find an interaction between content class and the duration effect itself. Furthermore, methodological implications of these results are discussed.
Survey And Challenges Of Qoe Management Issues In Wireless Networks With the move towards converged all-IP wireless network environments, managing end-user Quality of Experience (QoE) poses a challenging task, aimed at meeting high user expectations and requirements regarding reliable and cost-effective communication, access to any service, anytime and anywhere, and across multiple operator domains. In this paper, we give a survey of state-of-the-art research activities addressing the field of QoE management, focusing in particular on the domain of wireless networks and addressing three management aspects: QoE modeling, monitoring and measurement, and adaptation and optimization. Furthermore, we identify and discuss the key aspects and challenges that need to be considered when conducting research in this area.
A Novel Framework for Dynamic Utility-Based QoE Provisioning in Wireless Networks In this paper a novel framework for extending QoS to QoE in wireless networks is introduced. Instead of viewing QoE as an off-line a priori mapping between users' subjective perspective of their service quality and specific networking metrics, we treat QoE provisioning as a dynamic process that enables users to express their preference with respect to the instantaneous experience of their service performance, at the network's resource management mechanism. Specifically, we exploit network utility maximization (NUM) theory to efficiently correlate QoE and user-application interactions with the QoS-aware resource allocation process, through the dynamic adaptation of users' service-aware utility functions. The realization of the proposed approach in a CDMA cellular network supporting multimedia services is demonstrated and the achieved benefits from both end-users' and operators' point of view are discussed and evaluated.
Toward a Principled Framework to Design Dynamic Adaptive Streaming Algorithms over HTTP Client-side bitrate adaptation algorithms play a critical role in delivering a good quality of experience for Internet video. Many studies have shown that current solutions perform suboptimally, and despite the proliferation of several proposals in this space, both from commercial providers and researchers, there is still a distinct lack of clarity and consensus w.r.t. several natural questions: (1) What objectives does/should such an algorithm optimize? (2) What environment signals such as buffer occupancy or throughput estimates should an algorithm use in its control loop? (3) How sensitive is an algorithm to operating conditions (e.g., bandwidth stability, buffer size, available bitrates)? This work attempts to bring clarity to this discussion by casting adaptive bitrate streaming as a model-based predictive control problem. We demonstrate the initial promise of shedding light on these questions using this control-theoretic abstraction.
Scale-Space and Edge Detection Using Anisotropic Diffusion A new definition of scale-space is suggested, and a class of algorithms used to realize a diffusion process is introduced. The diffusion coefficient is chosen to vary spatially in such a way as to encourage intraregion smoothing rather than interregion smoothing. It is shown that the 'no new maxima should be generated at coarse scales' property of conventional scale space is preserved. As the region boundaries in the approach remain sharp, a high-quality edge detector which successfully exploits global information is obtained. Experimental results are shown on a number of images. Parallel hardware implementations are made feasible because the algorithm involves elementary, local operations replicated over the image.
Rough sets and vague concept approximation: from sample approximation to adaptive learning We present a rough set approach to vague concept approximation. Approximation spaces used for concept approximation have been initially defined on samples of objects (decision tables) representing partial information about concepts. Such approximation spaces defined on samples are next inductively extended on the whole object universe. This makes it possible to define the concept approximation on extensions of samples. We discuss the role of inductive extensions of approximation spaces in searching for concept approximation. However, searching for relevant inductive extensions of approximation spaces defined on samples is infeasible for compound concepts. We outline an approach making this searching feasible by using a concept ontology specified by domain knowledge and its approximation. We also extend this approach to a framework for adaptive approximation of vague concepts by agents interacting with environments. This paper realizes a step toward approximate reasoning in multiagent systems (MAS), intelligent systems, and complex dynamic systems (CAS).
Resilient Peer-to-Peer Streaming We consider the problem of distributing "live" streaming media content to a potentially large and highly dynamic population of hosts. Peer-to-peer content distribution is attractive in this setting because the bandwidth available to serve content scales with demand. A key challenge, however, is making content distribution robust to peer transience. Our approach to providing robustness is to introduce redundancy, both in network paths and in data. We use multiple, diverse distribution trees to provide redundancy in network paths and multiple description coding (MDC) to provide redundancy in data.We present a simple tree management algorithm that provides the necessary path diversity and describe an adaptation framework for MDC based on scalable receiver feedback. We evaluate these using MDC applied to real video data coupled with real usage traces from a major news site that experienced a large flash crowd for live streaming content. Our results show very significant benefits in using multiple distribution trees and MDC, with a 22 dB improvement in PSNR in some cases.
A method for multiple attribute decision making with incomplete weight information under uncertain linguistic environment Multi-attribute decision-making problems are studied in which the information about the attribute values takes the form of uncertain linguistic variables. The concept of the deviation degree between uncertain linguistic variables is defined, as is the ideal point of an uncertain linguistic decision matrix, and a formula for the possibility degree of the comparison between uncertain linguistic variables is proposed. Based on the deviation degree and the ideal point, an optimization model is established; solving it yields a simple and exact formula for determining the attribute weights when the weight information is completely unknown. For the case where the weight information is only partly known, another optimization model is established to determine the weights and then aggregate the given uncertain linguistic decision information. A method based on the possibility degree is given to rank the alternatives, and an illustrative example is provided.
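For orientation, one widely used possibility degree formula for comparing two interval quantities $\tilde a=[a^L,a^U]$ and $\tilde b=[b^L,b^U]$ (of which comparisons of uncertain linguistic variables are an instance) is shown below; the paper's exact definition may differ in detail, so this is only an assumed illustrative form.

```latex
p(\tilde a \ge \tilde b)
  = \min\!\left\{\max\!\left\{\frac{a^U - b^L}{(a^U - a^L) + (b^U - b^L)},\, 0\right\},\, 1\right\},
\qquad
p(\tilde a \ge \tilde b) + p(\tilde b \ge \tilde a) = 1 .
```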
Interval-Based Models for Decision Problems Uncertainty in decision problems has traditionally been handled by probabilities over unknown states of nature, such as market demand with several scenarios. Standard decision theory cannot deal with non-stochastic uncertainty, indeterminacy, or ignorance about the phenomenon at hand, and estimating probabilities requires many observations collected under the same conditions. Since economic conditions now change rapidly, such data are hard to obtain. Instead of the conventional approaches, this paper therefore presents interval-based models for decision problems as dual models. First, interval regression models are described as a kind of decision problem. Then, using interval regression analysis, interval weights in the AHP (Analytic Hierarchy Process) can be obtained that reflect the intuitive judgments given by an estimator. This approach, called interval AHP, uses a normality condition on the interval weights, which can be regarded as an interval probability. Finally, some basic definitions of interval probability in decision problems are given.
1.06914
0.070053
0.070053
0.03535
0.035027
0.011861
0.002171
0.000549
0.00011
0
0
0
0
0
Highly connected multicoloured subgraphs of multicoloured graphs Suppose the edges of the complete graph on n vertices, E(K_n), are coloured using r colours; how large a k-connected subgraph are we guaranteed to find which uses only at most s of the colours? This question is due to Bollobás, and the case s=1 was considered in Liu et al. [Highly connected monochromatic subgraphs of multicoloured graphs, J. Graph Theory, to appear]. Here we shall consider the case s>=2, proving in particular that when s=2 and r+1 is a power of 2 then the answer lies between 4n/(r+1)-17kr(r+2k+1) and 4n/(r+1)+4, that if r=2s+1 then the answer lies between (1-1/rs)n-7rsk and (1-1/rs)n+1, and that phase transitions occur near s=r/2 and at a second threshold depending on r. We shall also mention some of the more glaring open problems relating to this question.
Highly connected monochromatic subgraphs We conjecture that for n>4(k-1) every 2-coloring of the edges of the complete graph K_n contains a k-connected monochromatic subgraph with at least n-2(k-1) vertices. This conjecture, if true, is best possible. Here we prove it for k=2, and show how to reduce it to the case n<7k-6. We prove the following result as well: for n>16k every 2-colored K_n contains a k-connected monochromatic subgraph with at least n-12k vertices.
Maximum degree and fractional matchings in uniform hypergraphs Let ℋ be a family of r-subsets of a finite set X. Set D(ℋ) = max_{x∈X} |{E : x ∈ E ∈ ℋ}| (the maximum degree). We say that ℋ is intersecting if for any H, H′ ∈ ℋ we have H ∩ H′ ≠ ∅. In this case, obviously, D(ℋ) ≥ |ℋ|/r. According to a well-known conjecture, D(ℋ) ≥ |ℋ|/(r−1+1/r). We prove a slightly stronger result. Let ℋ be an r-uniform, intersecting hypergraph. Then either it is a projective plane of order r−1, and consequently D(ℋ) = |ℋ|/(r−1+1/r), or D(ℋ) ≥ |ℋ|/(r−1). This is a corollary to a more general theorem on not necessarily intersecting hypergraphs.
Vector Representation of Graph Domination We study a function on graphs, denoted by "Gamma", representing vectorially the domination number of a graph, in a way similar to that in which the Lovász theta function represents the independence number of a graph. This function is a lower bound on the homological connectivity of the independence complex of the graph, and hence is of value in studying matching problems by topological methods. Not much is known at present about the Gamma function; in particular, there is no known procedure for its computation for general graphs. In this article we compute the precise value of Gamma for trees and cycles, and to achieve this we prove new lower and upper bounds on Gamma, formulated in terms of known domination and algebraic parameters of the graph. We also use the Gamma function to prove a fractional version of a strengthening of Ryser's conjecture.
A Comment on Ryser’s Conjecture for Intersecting Hypergraphs Let $\tau(\mathcal{H})$ be the cover number and $\nu(\mathcal{H})$ be the matching number of a hypergraph $\mathcal{H}$. Ryser conjectured that every r-partite hypergraph $\mathcal{H}$ satisfies the inequality $\tau(\mathcal{H}) \leq (r-1)\,\nu(\mathcal{H})$. This conjecture is open for all r ≥ 4. For intersecting hypergraphs, namely those with $\nu(\mathcal{H}) = 1$, Ryser’s conjecture reduces to $\tau(\mathcal{H}) \leq r-1$. Even this conjecture is extremely difficult and is open for all r ≥ 6. For infinitely many r there are examples of intersecting r-partite hypergraphs with $\tau(\mathcal{H}) = r-1$, demonstrating the tightness of the conjecture for such r. However, all previously known constructions are not optimal as they use far too many edges. How sparse can an intersecting r-partite hypergraph be, given that its cover number is as large as possible, namely $\tau(\mathcal{H}) \ge r-1$? In this paper we solve this question for r ≤ 5, give an almost optimal construction for r = 6, prove that any r-partite intersecting hypergraph with $\tau(\mathcal{H}) \ge r-1$ must have at least $(3-\frac{1}{\sqrt{18}})r(1-o(1)) \approx 2.764r(1-o(1))$ edges, and conjecture that there exist constructions with $\Theta(r)$ edges.
On the fractional covering number of hypergraphs The fractional covering number r* of a hypergraph H(V, E) is defined to be the minimum
Fuzzy Sets
Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography A new paradigm, Random Sample Consensus (RANSAC), for fitting a model to experimental data is introduced. RANSAC is capable of interpreting/smoothing data containing a significant percentage of gross errors, and is thus ideally suited for applications in automated image analysis where interpretation is based on the data provided by error-prone feature detectors. A major portion of this paper describes the application of RANSAC to the Location Determination Problem (LDP): Given an image depicting a set of landmarks with known locations, determine that point in space from which the image was obtained. In response to a RANSAC requirement, new results are derived on the minimum number of landmarks needed to obtain a solution, and algorithms are presented for computing these minimum-landmark solutions in closed form. These results provide the basis for an automatic system that can solve the LDP under difficult viewing conditions.
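The sketch below illustrates the RANSAC loop on the simplest possible model, a 2-D line: repeatedly fit from a minimal random sample, count inliers within a tolerance, and refit on the largest consensus set. The line model, tolerance, and iteration count are illustrative assumptions; the paper applies the same paradigm to the location determination problem.

```python
import numpy as np

def ransac_line(points, n_iter=500, inlier_tol=1.0, seed=0):
    """Fit y = a*x + b to points (N x 2 array) contaminated by gross outliers."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iter):
        i, j = rng.choice(len(points), size=2, replace=False)   # minimal sample
        (x1, y1), (x2, y2) = points[i], points[j]
        if np.isclose(x1, x2):
            continue                                            # degenerate pair
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        residuals = np.abs(points[:, 1] - (a * points[:, 0] + b))
        inliers = residuals < inlier_tol
        if inliers.sum() > best_inliers.sum():                  # keep largest consensus set
            best_inliers = inliers
    if best_inliers.sum() < 2:
        raise ValueError("no consensus set found")
    # final least-squares refit on the consensus set only
    a, b = np.polyfit(points[best_inliers, 0], points[best_inliers, 1], 1)
    return (a, b), best_inliers
```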
On the construction of sparse tensor product spaces. Let $\Omega_1 \subset \mathbb{R}^{n_1}$ and $\Omega_2 \subset \mathbb{R}^{n_2}$ be two given domains and consider on each domain a multiscale sequence of ansatz spaces of polynomial exactness $r_1$ and $r_2$, respectively. In this paper, we study the optimal construction of sparse tensor products made from these spaces. In particular, we derive the resulting cost complexities to approximate functions with anisotropic and isotropic smoothness on the tensor product domain $\Omega_1 \times \Omega_2$. Numerical results validate our theoretical findings.
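A toy way to see why the sparse construction pays off is to count degrees of freedom for the full versus the sparse level-index set, assuming roughly 2^l new basis functions per level in each coordinate; this dyadic assumption and the threshold l1 + l2 <= L are illustrative and far simpler than the general construction analysed in the paper.

```python
def level_index_sets(L):
    """Level pairs retained by the full and the sparse tensor product construction."""
    full = [(l1, l2) for l1 in range(L + 1) for l2 in range(L + 1)]
    sparse = [(l1, l2) for (l1, l2) in full if l1 + l2 <= L]
    return full, sparse

def degrees_of_freedom(pairs):
    # assume about 2**l new basis functions on level l in each coordinate direction
    return sum(2 ** l1 * 2 ** l2 for l1, l2 in pairs)

full, sparse = level_index_sets(8)
print(degrees_of_freedom(full), degrees_of_freedom(sparse))
# the sparse count grows like L * 2**L instead of 4**L for the full tensor product
```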
Exact Matrix Completion via Convex Optimization We consider a problem of considerable practical interest: the recovery of a data matrix from a sampling of its entries. Suppose that we observe m entries selected uniformly at random from a matrix M. Can we complete the matrix and recover the entries that we have not seen? We show that one can perfectly recover most low-rank matrices from what appears to be an incomplete set of entries. We prove that if the number m of sampled entries obeys $m \ge C\,n^{1.2} r \log n$ for some positive numerical constant C, then with very high probability, most n×n matrices of rank r can be perfectly recovered by solving a simple convex optimization program. This program finds the matrix with minimum nuclear norm that fits the data. The condition above assumes that the rank is not too large. However, if one replaces the 1.2 exponent with 1.25, then the result holds for all values of the rank. Similar results hold for arbitrary rectangular matrices as well. Our results are connected with the recent literature on compressed sensing, and show that objects other than signals and images can be perfectly reconstructed from very limited information.
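The nuclear-norm program can be handed to a generic convex solver, but a common lightweight way to approximate it is singular value thresholding, sketched below. The shrinkage level, step size, and iteration count are heuristic assumptions, and this iteration is a related solver rather than the construction analysed in the paper.

```python
import numpy as np

def svt_complete(M, mask, tau=None, step=1.2, n_iter=300):
    """Approximate nuclear-norm matrix completion by singular value thresholding.

    M    -- matrix with observed entries filled in (other entries arbitrary)
    mask -- boolean array, True where an entry was observed
    """
    if tau is None:
        tau = 5 * np.sqrt(M.size)                   # heuristic shrinkage level
    Y = np.zeros_like(M, dtype=float)
    X = np.zeros_like(M, dtype=float)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        X = (U * np.maximum(s - tau, 0.0)) @ Vt     # shrink the singular values
        Y = Y + step * mask * (M - X)               # push X toward the observed entries
    return X
```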
Restricted Eigenvalue Properties for Correlated Gaussian Designs Methods based on l1-relaxation, such as basis pursuit and the Lasso, are very popular for sparse regression in high dimensions. The conditions for success of these methods are now well-understood: (1) exact recovery in the noiseless setting is possible if and only if the design matrix X satisfies the restricted nullspace property, and (2) the squared l2-error of a Lasso estimate decays at the minimax optimal rate k log p / n, where k is the sparsity of the p-dimensional regression problem with additive Gaussian noise, whenever the design satisfies a restricted eigenvalue condition. The key issue is thus to determine when the design matrix X satisfies these desirable properties. Thus far, there have been numerous results showing that the restricted isometry property, which implies both the restricted nullspace and eigenvalue conditions, is satisfied when all entries of X are independent and identically distributed (i.i.d.), or the rows are unitary. This paper proves directly that the restricted nullspace and eigenvalue conditions hold with high probability for quite general classes of Gaussian matrices for which the predictors may be highly dependent, and hence restricted isometry conditions can be violated with high probability. In this way, our results extend the attractive theoretical guarantees on l1-relaxations to a much broader class of problems than the case of completely independent or unitary designs.
User impatience and network performance In this work, we analyze from passive measurements the correlations between the user-induced interruptions of TCP connections and different end-to-end performance metrics. The aim of this study is to assess the possibility for a network operator to take into account the customers' experience for network monitoring. We first observe that the usual connection-level performance metrics of the interrupted connections are not very different, and sometimes better than those of normal connections. However, the request-level performance metrics show stronger correlations between the interruption rates and the network quality-of-service. Furthermore, we show that the user impatience could also be used to characterize the relative sensitivity of data applications to various network performance metrics.
Fuzzy concepts and formal methods: some illustrative examples It has been recognised that formal methods are useful as a modelling tool in requirements engineering. Specification languages such as Z permit the precise and unambiguous modelling of system properties and behaviour. However, some system problems, particularly those drawn from the information systems (IS) problem domain, may be difficult to model in crisp or precise terms. It may also be desirable that formal modelling should commence as early as possible, even when our understanding of parts of the problem domain is only approximate. This paper identifies the problem types of interest and argues that they are characterised by uncertainty and imprecision. It suggests fuzzy set theory as a useful formalism for modelling aspects of this imprecision. The paper illustrates how a fuzzy logic toolkit for Z can be applied to such problem domains. Several examples are presented illustrating the representation of imprecise concepts as fuzzy sets and relations, and soft pre-conditions and system requirements as a series of linguistically quantified propositions.
Subjective Quality Metric For 3d Video Services Three-dimensional (3D) video service is expected to be introduced as a next-generation television service. Stereoscopic video is composed of two 2D video signals for the left and right views, and these 2D video signals are encoded. Video quality between the left and right views is not always consistent because, for example, each view is encoded at a different bit rate. As a result, the video quality difference between the left and right views degrades the quality of stereoscopic video. However, these characteristics have not been thoroughly studied or modeled. Therefore, it is necessary to better understand how the video quality difference affects stereoscopic video quality and to model the video quality characteristics. To do that, we conducted subjective quality assessments to derive subjective video quality characteristics. The characteristics showed that 3D video quality was affected by the difference in video quality between the left and right views, and that when the difference was small, 3D video quality correlated with the highest 2D video quality of the two views. We modeled these characteristics as a subjective quality metric using a training data set. Finally, we verified the performance of our proposed model by applying it to unknown data sets.
1.214286
0.107143
0.005558
0.003333
0.001111
0.000002
0
0
0
0
0
0
0
0
Augmented Vision And Quality Of Experience Assessment: Towards A Unified Evaluation Framework New display modalities in forthcoming media consumption scenarios require a realignment of currently employed Quality of Experience evaluation frameworks for these novel settings. We consider commercially available optical see-through devices, typically employed by operators in augmented vision or augmented reality scenarios. Based on current multimedia evaluation frameworks, we extrapolate onto the additional environmental challenges provided by the overlay of media content with real-world backgrounds. We derive an overall framework of configurations and metrics that should be part of subjective quality assessment studies and be incorporated into future databases to provide a high quality ground truth foundation for long-term applicability. We present an exemplary experimental setup of a pilot study with related components currently in use to perform human subject experiments for this domain.
Towards Predictions of the Image Quality of Experience for Augmented Reality Scenarios. Augmented Reality (AR) devices are commonly head-worn to overlay context-dependent information into the field of view of the device operators. One particular scenario is the overlay of still images, either in a traditional fashion, or as spherical, i.e., immersive, content. For both media types, we evaluate the interplay of user ratings as Quality of Experience (QoE) with (i) the non-referential BRISQUE objective image quality metric and (ii) human subject dry electrode EEG signals gathered with a commercial device. Additionally, we employ basic machine learning approaches to assess the possibility of QoE predictions based on rudimentary subject data. Corroborating prior research for the overall scenario, we find strong correlations for both approaches with user ratings as Mean Opinion Scores, which we consider as QoE metric. In prediction scenarios based on data subsets, we find good performance for the objective metric as well as the EEG-based approach. While the objective metric can yield high QoE prediction accuracies overall, it is limited in its application to individual subjects. The subject-based EEG approach, on the other hand, enables good predictability of the QoE for both media types, but with better performance for regular content. Our results can be employed in practical scenarios by content and network service providers to optimize the user experience in augmented reality scenarios.
Visual User Experience Difference: Image Compression Impacts On The Quality Of Experience In Augmented Binocular Vision With vision augmentation entering consumer application scenarios with a wide range of adaptation possibilities, an understanding of the interplay of Quality of Service (QoS) factors and resulting device operator Quality of Experience (QoE) becomes increasingly significant. We evaluate the effects of image compression as QoS factor on the QoE by describing the difference between traditional opaque and highly transparent vision augmenting display scenarios, which we denote as Visual User Experience Difference (VUED). We find that based on mean opinion scores, higher ratings are attained in augmented settings only for higher qualities, while lower qualities exhibit a reverse trend. Furthermore, we present a quantified relationship between traditional and augmented vision for a set of common images for the first time. The differential mean opinion score in the vision augmenting setting is additionally compared to major objective image quality metrics and embedded into current theories for mapping QoS to QoE, for which we find and describe suitable fits.
Visual Interface Evaluation for Wearables Datasets: Predicting the Subjective Augmented Vision Image QoE and QoS. As Augmented Reality (AR) applications become commonplace, the determination of a device operator's subjective Quality of Experience (QoE) in addition to objective Quality of Service (QoS) metrics gains importance. Human subject experimentation is common for QoE relationship determinations due to the subjective nature of the QoE. In AR scenarios, the overlay of displayed content with the real world adds to the complexity. We employ Electroencephalography (EEG) measurements as the solution to the inherent subjectivity and situationality of AR content display overlaid with the real world. Specifically, we evaluate prediction performance for traditional image display (AR) and spherical/immersive image display (SAR) for the QoE and underlying QoS levels. Our approach utilizing a four-position EEG wearable achieves high levels of accuracy. Our detailed evaluation of the available data indicates that fewer sensors would perform almost as well and could be integrated into future wearable devices. Additionally, we make our Visual Interface Evaluation for Wearables (VIEW) datasets from human subject experimentation publicly available and describe their utilization.
Ant colony optimization for QoE-centric flow routing in software-defined networks We present design, implementation, and an evaluation of an ant colony optimization (ACO) approach to flow routing in software-defined networking (SDN) environments. While exploiting a global network view and configuration flexibility provided by SDN, the approach also utilizes quality of experience (QoE) estimation models and seeks to maximize the user QoE for multimedia services. As network metrics (e.g., packet loss) influence QoE for such services differently, based on the service type and its integral media flows, the goal of our ACO-based heuristic algorithm is to calculate QoE-aware paths that conform to traffic demands and network limitations. A Java implementation of the algorithm is integrated into SDN controller OpenDaylight so as to program the path selections. The evaluation results indicate promising QoE improvements of our approach over shortest path routing, as well as low running time.
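A minimal sketch of the ant-colony path search described above is given below for a directed graph with per-link costs standing in for QoE impact; the pheromone update rule, the parameters, and the cost-as-QoE-proxy are simplifying assumptions, and the OpenDaylight integration is omitted entirely.

```python
import random

def aco_route(graph, src, dst, n_ants=20, n_iter=50, rho=0.1, alpha=1.0, beta=2.0):
    """Ant-colony path search on a weighted digraph {u: {v: cost}} (illustrative sketch).

    Lower edge cost stands in for a QoE-friendlier link (e.g. less loss or delay);
    a real controller would plug a service-specific QoE model in here.
    """
    pher = {(u, v): 1.0 for u in graph for v in graph[u]}
    best_path, best_cost = None, float("inf")
    for _ in range(n_iter):
        tours = []
        for _ in range(n_ants):
            node, path, visited = src, [src], {src}
            while node != dst:
                choices = [v for v in graph.get(node, {}) if v not in visited]
                if not choices:
                    path = None                                   # dead end, discard ant
                    break
                weights = [pher[(node, v)] ** alpha * (1.0 / graph[node][v]) ** beta
                           for v in choices]
                node = random.choices(choices, weights=weights, k=1)[0]
                path.append(node)
                visited.add(node)
            if path:
                cost = sum(graph[a][b] for a, b in zip(path, path[1:]))
                tours.append((cost, path))
                if cost < best_cost:
                    best_cost, best_path = cost, path
        # evaporate, then deposit pheromone proportional to path quality
        for edge in pher:
            pher[edge] *= (1.0 - rho)
        for cost, path in tours:
            for a, b in zip(path, path[1:]):
                pher[(a, b)] += 1.0 / cost
    return best_path, best_cost
```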
Objective Video Quality Assessment Methods: A Classification, Review, and Performance Comparison With the increasing demand for video-based applications, the reliable prediction of video quality has increased in importance. Numerous video quality assessment methods and metrics have been proposed over the past years with varying computational complexity and accuracy. In this paper, we introduce a classification scheme for full-reference and reduced-reference media-layer objective video quality assessment methods. Our classification scheme first classifies a method according to whether natural visual characteristics or perceptual (human visual system) characteristics are considered. We further subclassify natural visual characteristics methods into methods based on natural visual statistics or natural visual features. We subclassify perceptual characteristics methods into frequency or pixel-domain methods. According to our classification scheme, we comprehensively review and compare the media-layer objective video quality models for both standard resolution and high definition video. We find that the natural visual statistics based MultiScale-Structural SIMilarity index (MS-SSIM), the natural visual feature based Video Quality Metric (VQM), and the perceptual spatio-temporal frequency-domain based MOtion-based Video Integrity Evaluation (MOVIE) index give the best performance for the LIVE Video Quality Database.
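As a concrete example of running one of the surveyed full-reference metrics, the snippet below computes the single-scale SSIM index (the building block of MS-SSIM) with scikit-image on a synthetic reference/degraded pair; the test frames are stand-ins, not data from the compared databases.

```python
import numpy as np
from skimage.metrics import structural_similarity

rng = np.random.default_rng(0)
ref = rng.random((144, 176))                                   # stand-in reference frame
deg = np.clip(ref + 0.05 * rng.normal(size=ref.shape), 0, 1)   # mildly degraded version

score = structural_similarity(ref, deg, data_range=1.0)        # 1.0 would mean identical frames
print(f"SSIM = {score:.3f}")
```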
Video-QoE aware resource management at network core We address the problem of video-aware multiuser resource management in modern wireless networks such as 3GPP Long Term Evolution (LTE) in the context of video delivery systems like HTTP Adaptive Streaming (HAS). HAS is a client driven video rate adaptation and delivery framework that is becoming popular due to its inherent advantages over existing video delivery solutions. Quality of Experience (QoE) is the prime performance criterion for adaptive video streaming and wireless resource management is a critical part in providing a target QoE for video delivery over wireless systems. However modifying resource management functionality to be video-aware at the network edge is rather difficult in practical networks. In this paper, we propose an alternative architecture to enhance video QoE in QoS (Quality of Service)-aware networks wherein the intelligence for video aware resource management resides at the network-core rather than at the network edge. In this new architecture, a "Video Aware Controller" (VAC) is placed at the network core. The VAC periodically receives HAS-related feedback from adaptive streaming clients/servers which it converts to QoS parameters for each user. Further, we propose an algorithm to dynamically compute the Maximum Bit Rate (MBR) for each streaming user based on media buffer feedback. Our simulation results on an LTE system level simulator demonstrate significant reduction in re-buffering percentage and enhanced QoE-outage capacity compared to existing schemes.
Anticipatory Buffer Control and Quality Selection for Wireless Video Streaming. Video streaming is in high demand by mobile users, as recent studies indicate. In cellular networks, however, the unreliable wireless channel leads to two major problems. Poor channel states degrade video quality and interrupt the playback when a user cannot sufficiently fill its local playout buffer: buffer underruns occur. In contrast to that, good channel conditions cause common greedy buffering schemes to pile up very long buffers. Such over-buffering wastes expensive wireless channel capacity. To keep buffering in balance, we employ a novel approach. Assuming that we can predict data rates, we plan the quality and download time of the video segments ahead. This anticipatory scheduling avoids buffer underruns by downloading a large number of segments before a channel outage occurs, without wasting wireless capacity by excessive buffering. We formalize this approach as an optimization problem and derive practical heuristics for segmented video streaming protocols (e.g., HLS or MPEG DASH). Simulation results and testbed measurements show that our solution essentially eliminates playback interruptions without significantly decreasing video quality.
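A simple way to see the idea is the greedy planner sketched below: given a per-segment throughput forecast, it picks for each upcoming segment the highest bitrate that keeps the predicted buffer above a safety margin, so it pre-fetches aggressively ahead of predicted outages without over-buffering. The greedy rule, margins, and names are illustrative assumptions; the paper formulates this as an optimization problem with dedicated heuristics.

```python
def plan_segments(rate_forecast, qualities, seg_dur=2.0, buffer_s=4.0, safety=2.0):
    """Greedy anticipatory plan over a throughput forecast (illustrative heuristic).

    rate_forecast -- predicted throughput (bit/s) while each segment downloads
    qualities     -- available bitrates (bit/s), ascending
    """
    plan, buf = [], buffer_s
    for rate in rate_forecast:
        chosen = qualities[0]                 # fall back to the lowest bitrate if nothing is safe
        for q in qualities:                   # feasibility is monotone, so the last safe q wins
            download_t = q * seg_dur / rate
            if download_t <= buf and buf - download_t + seg_dur >= safety:
                chosen = q
        download_t = chosen * seg_dur / rate
        buf = max(0.0, buf - download_t) + seg_dur
        plan.append(chosen)
    return plan

# Example: a predicted throughput dip in the third slot forces an early quality reduction
print(plan_segments([3e6, 1e6, 0.4e6, 2e6], [300e3, 750e3, 1500e3, 3000e3]))
```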
Passive Estimation of Quality of Experience Quality of Experience (QoE) is a promising method to take into account the users' needs in designing, monitoring and managing networks. However, there is a challenge in finding a quick and simple way to estimate the QoE due to the diversity of needs, habits and customs. We propose a new empirical method to approximate it automatically from passive network measurements and we compare its pros and cons with the usual techniques. We apply it, as an example, to ADSL traffic traces to estimate the dependence of the QoE on the loss rate for the most used applications. We analyze more precisely the correlations between packet losses and some traffic characteristics of TCP connections: the duration, the sizes, and the inter-arrival times. We define different thresholds on the loss rate for network management, and we propose a notion of sensitiveness to compare these correlations across different applications.
The variety generated by the truth value algebra of type-2 fuzzy sets This paper addresses some questions about the variety generated by the algebra of truth values of type-2 fuzzy sets. Its principal result is that this variety is generated by a finite algebra, and in particular is locally finite. This provides an algorithm for determining when an equation holds in this variety. It also sheds light on the question of determining an equational axiomatization of this variety, although this problem remains open.
Incremental refinement of image salient-point detection. Low-level image analysis systems typically detect "points of interest", i.e., areas of natural images that contain corners or edges. Most of the robust and computationally efficient detectors proposed for this task use the autocorrelation matrix of the localized image derivatives. Although the performance of such detectors and their suitability for particular applications has been studied in relevant literature, their behavior under limited input source (image) precision or limited computational or energy resources is largely unknown. All existing frameworks assume that the input image is readily available for processing and that sufficient computational and energy resources exist for the completion of the result. Nevertheless, recent advances in incremental image sensors or compressed sensing, as well as the demand for low-complexity scene analysis in sensor networks now challenge these assumptions. In this paper, we investigate an approach to compute salient points of images incrementally, i.e., the salient point detector can operate with a coarsely quantized input image representation and successively refine the result (the derived salient points) as the image precision is successively refined by the sensor. This has the advantage that the image sensing and the salient point detection can be terminated at any input image precision (e.g., bound set by the sensory equipment or by computation, or by the salient point accuracy required by the application) and the obtained salient points under this precision are readily available. We focus on the popular detector proposed by Harris and Stephens and demonstrate how such an approach can operate when the image samples are refined in a bitwise manner, i.e., the image bitplanes are received one-by-one from the image sensor. We estimate the required energy for image sensing as well as the computation required for the salient point detection based on stochastic source modeling. The computation and energy required by the proposed incremental refinement approach is compared against the conventional salient-point detector realization that operates directly on each source precision and cannot refine the result. Our experiments demonstrate the feasibility of incremental approaches for salient point detection in various classes of natural images. In addition, a first comparison between the results obtained by the intermediate detectors is presented and a novel application for adaptive low-energy image sensing based on points of saliency is presented.
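The detector being refined here builds on the Harris-Stephens response computed from the autocorrelation (structure tensor) matrix; a plain, non-incremental version of that response is sketched below with scipy, and the bitplane-by-bitplane refinement of the input image is deliberately omitted. Parameter values and helper names are assumptions of this sketch.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def harris_response(img, sigma=1.5, k=0.05):
    """Harris-Stephens corner response from the smoothed structure tensor."""
    img = img.astype(float)
    ix = sobel(img, axis=1)
    iy = sobel(img, axis=0)
    # elementwise products of derivatives, averaged over a local Gaussian window
    sxx = gaussian_filter(ix * ix, sigma)
    syy = gaussian_filter(iy * iy, sigma)
    sxy = gaussian_filter(ix * iy, sigma)
    det = sxx * syy - sxy ** 2
    trace = sxx + syy
    return det - k * trace ** 2          # large positive values indicate corners

def salient_points(img, n_points=100):
    # simplified selection: strongest responses without non-maximum suppression
    r = harris_response(img)
    idx = np.argsort(r, axis=None)[-n_points:]
    return np.column_stack(np.unravel_index(idx, r.shape))
```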
POPFNN-AAR(S): a pseudo outer-product based fuzzy neural network A novel fuzzy neural network, the pseudo outer-product-based fuzzy neural network using the singleton fuzzifier together with the approximate analogical reasoning schema, is proposed in this paper. The network, referred to as the singleton fuzzifier POPFNN-AARS, employs the approximate analogical reasoning schema (AARS) instead of the commonly used truth value restriction (TVR) method. This makes the structure and learning algorithms of the singleton fuzzifier POPFNN-AARS simpler and conceptually clearer than those of the POPFNN-TVR model. Different similarity measures (SM) and modification functions (FM) for AARS are investigated. The structure and learning algorithms of the proposed singleton fuzzifier POPFNN-AARS are presented. Several sets of real-life data are used to test the performance of the singleton fuzzifier POPFNN-AARS, and the experimental results are presented and discussed in detail.
A parallel image encryption method based on compressive sensing Recently, compressive sensing-based encryption methods which combine sampling, compression and encryption together have been proposed. However, since the quantized measurement data obtained from linear dimension reduction projection directly serve as the encrypted image, the existing compressive sensing-based encryption methods fail to resist against the chosen-plaintext attack. To enhance the security, a block cipher structure consisting of scrambling, mixing, S-box and chaotic lattice XOR is designed to further encrypt the quantized measurement data. In particular, the proposed method works efficiently in the parallel computing environment. Moreover, a communication unit exchanges data among the multiple processors without collision. This collision-free property is equivalent to optimal diffusion. The experimental results demonstrate that the proposed encryption method not only achieves the remarkable confusion, diffusion and sensitivity but also outperforms the existing parallel image encryption methods with respect to the compressibility and the encryption speed.
The performance evaluation of a spectrum sensing implementation using an automatic modulation classification detection method with a Universal Software Radio Peripheral Based on the inherent capability of automatic modulation classification (AMC), a new spectrum sensing method is proposed in this paper that can detect all forms of primary users' signals in a cognitive radio environment. The study presented in this paper focuses on sensing a combination of analog and digitally modulated primary signals. To achieve this objective, a combined analog and digital automatic modulation classifier was developed using an artificial neural network (ANN). The ANN classifier was combined with GNU Radio and a Universal Software Radio Peripheral version 2 (USRP2) to develop a Cognitive Radio Engine (CRE) for detecting primary users' signals in a cognitive radio environment. Detailed information on the development and performance of the CRE is presented in this paper. The performance evaluation of the developed CRE shows that the engine can reliably detect all of the primary modulated signals considered, and a comparative evaluation shows that the proposed detection method performs favorably against the energy detection method, which is currently regarded as the best detection method. The results reveal that a single detection method that can reliably detect all forms of primary radio signals in a cognitive radio environment can only be developed if a feature common to all radio signals is used in its development, rather than features that are peculiar to certain signal types only.
1.06007
0.060537
0.047516
0.025556
0.006667
0.00199
0.000722
0.000139
0.000011
0
0
0
0
0