Dataset schema (as reported by the dataset viewer; each record below consists of a Query Text, Ranking 1 through Ranking 13, and score_0 through score_13):
Query Text: string, length 10 to 59.9k
Ranking 1: string, length 10 to 4.53k
Ranking 2: string, length 10 to 50.9k
Ranking 3: string, length 10 to 6.78k
Ranking 4: string, length 10 to 59.9k
Ranking 5: string, length 10 to 6.78k
Ranking 6: string, length 10 to 59.9k
Ranking 7: string, length 10 to 59.9k
Ranking 8: string, length 10 to 6.78k
Ranking 9: string, length 10 to 59.9k
Ranking 10: string, length 10 to 50.9k
Ranking 11: string, length 13 to 6.78k
Ranking 12: string, length 14 to 50.9k
Ranking 13: string, length 24 to 2.74k
score_0: float64, range 1 to 1.25
score_1: float64, range 0 to 0.25
score_2: float64, range 0 to 0.25
score_3: float64, range 0 to 0.24
score_4: float64, range 0 to 0.24
score_5: float64, range 0 to 0.24
score_6: float64, range 0 to 0.21
score_7: float64, range 0 to 0.07
score_8: float64, range 0 to 0.03
score_9: float64, range 0 to 0.01
score_10 to score_13: float64, range 0 to 0
Quadrant of euphoria: a crowdsourcing platform for QoE assessment Existing quality of experience assessment methods, subjective or objective, suffer from inaccurate experiment tools, expensive personnel costs, or both. The panacea for them, as we have come to realize, lies in the joint application of paired comparison and crowdsourcing, the latter being a Web 2.0 practice in which organizations ask ordinary, unspecified Internet users to carry out internal tasks. We present in this article Quadrant of Euphoria, a user-friendly Web-based platform facilitating QoE assessments in network and multimedia studies, which features low cost, participant diversity, meaningful and interpretable QoE scores, subject consistency assurance, and a burdenless experiment process.
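For readers who want to see how paired-comparison votes of this kind become interval-scale QoE scores, the sketch below fits a Bradley-Terry model to a pairwise win-count matrix with the standard MM update. It is only an illustrative stand-in: the win matrix, iteration counts, and the choice of Bradley-Terry itself are assumptions, not details taken from the article.

```python
import numpy as np

def bradley_terry_scores(wins, n_iter=200, tol=1e-9):
    """Estimate Bradley-Terry merit scores from a pairwise win-count matrix.

    wins[i, j] = number of times stimulus i was preferred over stimulus j.
    Uses the standard MM (minorization-maximization) update; returns scores
    normalized to sum to 1.
    """
    n = wins.shape[0]
    comparisons = wins + wins.T          # n_ij: total comparisons of pair (i, j)
    p = np.ones(n) / n                   # initial merit estimates
    for _ in range(n_iter):
        new_p = np.empty(n)
        for i in range(n):
            denom = 0.0
            for j in range(n):
                if j != i and comparisons[i, j] > 0:
                    denom += comparisons[i, j] / (p[i] + p[j])
            new_p[i] = wins[i].sum() / denom if denom > 0 else p[i]
        new_p /= new_p.sum()
        if np.abs(new_p - p).max() < tol:
            p = new_p
            break
        p = new_p
    return p

# Toy example: 3 video clips, clip 0 is usually preferred.
wins = np.array([[0, 8, 9],
                 [2, 0, 6],
                 [1, 4, 0]], dtype=float)
print(bradley_terry_scores(wins))
```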
Queuing based optimal scheduling mechanism for QoE provisioning in cognitive radio relaying network In a cognitive radio network (CRN), secondary users (SUs) can share the licensed spectrum with the primary users (PUs). Compared with traditional networks, spectrum utilization in a CRN is greatly improved. To ensure the performance of the SUs as well as the PU, wireless relaying can be employed to improve system capacity. Meanwhile, quality-of-experience (QoE) should be considered and provisioned in the relay scheduling scheme to ensure user experience and comprehensive network performance. In this paper, we study a QoE provisioning mechanism for a queuing-based optimal relay scheduling problem in a CRN. We design a QoE provisioning scheme with multiple optimization goals: higher capacity and lower packet loss probability. The simulation results show that our mechanism achieves much better packet-loss performance with suboptimal system capacity, and that it can guarantee a better user experience through the specific QoS-QoE mapping models. Our mechanism therefore improves network performance and user experience comprehensively.
Mobile quality of experience: Recent advances and challenges Quality of Experience (QoE) is important from both a user perspective, since it assesses the quality a user actually experiences, and a network perspective, since it is important for a provider to dimension its network to support the necessary QoE. This paper presents some recent advances on the modeling and measurement of QoE with an emphasis on mobile networks. It also identifies key challenges for mobile QoE.
Personalized user engagement modeling for mobile videos. The ever-increasing mobile video services and users' demand for better video quality have boosted research into video Quality-of-Experience. Recently, the concept of Quality-of-Experience has evolved into Quality-of-Engagement, a more actionable metric that evaluates users' engagement with video services and relates directly to the service providers' revenue model. Existing works on user engagement mostly adopt uniform models to quantify the engagement level of all users, overlooking the essential distinction of individual users. In this paper, we first conduct a large-scale measurement study on a real-world data set to demonstrate the dramatic discrepancy in user engagement, which implies that a uniform model is not expressive enough to characterize the distinctive engagement pattern of each user. To address this problem, we propose PE, a personalized user engagement model for mobile videos, which, for the first time, addresses user diversity in engagement modeling. Evaluation results on a real-world data set show that our system significantly outperforms the uniform engagement models, with a 19.14% performance gain.
QoE-based transport optimization for video delivery over next generation cellular networks Video streaming is considered as one of the most important and challenging applications for next generation cellular networks. Current infrastructures are not prepared to deal with the increasing amount of video traffic. The current Internet, and in particular the mobile Internet, was not designed with video requirements in mind and, as a consequence, its architecture is very inefficient for handling video traffic. Enhancements are needed to cater for improved Quality of Experience (QoE) and improved reliability in a mobile network. In this paper we design a novel dynamic transport architecture for next generation mobile networks adapted to video service requirements. Its main novelty is the transport optimization of video delivery that is achieved through a QoE oriented redesign of networking mechanisms as well as the integration of Content Delivery Networks (CDN) techniques.
Guest Editorial QoE-Aware Wireless Multimedia Systems. The 11 papers in this special issue cover a range of topics and can be logically organized in three groups, focusing on QoE-aware media protection, QoE assessment and modelling, and multi-user-QoE management.
The user in experimental computer systems research Experimental computer systems research typically ignores the end-user, modeling him, if at all, in overly simple ways. We argue that this (1) results in inadequate performance evaluation of the systems, and (2) ignores opportunities. We summarize our experiences with (a) directly evaluating user satisfaction and (b) incorporating user feedback in different areas of client/server computing, and use our experiences to motivate principles for that domain. Specifically, we report on user studies to measure user satisfaction with resource borrowing and with different clock frequencies in desktop computing, the development and evaluation of user interfaces to integrate user feedback into scheduling and clock frequency decisions in this context, and results in predicting user action and system response in a remote display system. We also present initial results on extending our work to user control of scheduling and mapping of virtual machines in a virtualization-based distributed computing environment. We then generalize (a) and (b) as recommendations for incorporating the user into experimental computer systems research.
Quality of experience management in mobile cellular networks: key issues and design challenges. Telecom operators have recently faced the need for a radical shift from technical quality requirements to customer experience guarantees. This trend has emerged due to the constantly increasing amount of mobile devices and applications and the explosion of overall traffic demand, forming a new era: “the rise of the consumer”. New terms have been coined in order to quantify, manage, and improve the...
Impact Of Mobile Devices And Usage Location On Perceived Multimedia Quality We explore the quality impact when audiovisual content is delivered to different mobile devices. Subjects were shown the same sequences on five different mobile devices and a broadcast quality television. Factors influencing quality ratings include video resolution, viewing distance, and monitor size. Analysis shows how subjects' perception of multimedia quality differs when content is viewed on different mobile devices. In addition, quality ratings from laboratory and simulated living room sessions were statistically equivalent.
MIMO technologies in 3GPP LTE and LTE-advanced The 3rd Generation Partnership Project (3GPP) has recently completed the specification of the Long Term Evolution (LTE) standard. The majority of the world's operators and vendors are already committed to LTE deployments and developments, making LTE the market leader in the upcoming evolution to 4G wireless communication systems. Multiple input multiple output (MIMO) technologies introduced in LTE such as spatial multiplexing, transmit diversity, and beamforming are key components for providing higher peak rates at better system efficiency, which are essential for supporting future broadband data service over wireless links. Further extension of LTE MIMO technologies is being studied under the 3GPP study item "LTE-Advanced" to meet the requirements of IMT-Advanced set by the International Telecommunication Union Radiocommunication Sector (ITU-R). In this paper, we introduce various MIMO technologies employed in LTE and provide a brief overview of the MIMO technologies currently discussed in the LTE-Advanced forum.
The price of privacy and the limits of LP decoding This work is at the intersection of two lines of research. One line, initiated by Dinur and Nissim, investigates the price, in accuracy, of protecting privacy in a statistical database. The second, growing from an extensive literature on compressed sensing (see in particular the work of Donoho and collaborators [4,7,13,11]) and explicitly connected to error-correcting codes by Candès and Tao ([4]; see also [5,3]), is the use of linear programming for error correction. Our principal result is the discovery of a sharp threshold ρ* ≈ 0.239, so that if ρ < ρ* and A is a random m x n encoding matrix of independently chosen standard Gaussians, where m = O(n), then with overwhelming probability over the choice of A, for all x ∈ Rn, LP decoding corrects ⌊ρm⌋ arbitrary errors in the encoding Ax, while decoding can be made to fail if the error rate exceeds ρ*. Our bound resolves an open question of Candès, Rudelson, Tao, and Vershynin [3] and (oddly, but explicably) refutes empirical conclusions of Donoho [11] and Candès et al. [3]. By scaling and rounding we can easily transform these results to obtain polynomial-time decodable random linear codes with polynomial-sized alphabets tolerating any ρ < ρ*. In the context of privacy-preserving data mining our results say that any privacy mechanism, interactive or non-interactive, providing reasonably accurate answers to a 0.761 fraction of randomly generated weighted subset sum queries, and arbitrary answers on the remaining 0.239 fraction, is blatantly non-private.
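The LP decoding referred to here recovers x from a corrupted encoding by minimizing the l1 norm of the residual, which can be expressed as a linear program. The sketch below is a minimal illustration of that formulation; the matrix sizes, the error pattern, and the use of scipy are assumptions for the example, not material from the paper.

```python
import numpy as np
from scipy.optimize import linprog

def lp_decode(A, y):
    """Recover x from y ~ Ax + e by solving  min_x ||y - Ax||_1  as a linear program.

    Variables are [x (n); t (m)] with constraints -t <= y - Ax <= t.
    """
    m, n = A.shape
    c = np.concatenate([np.zeros(n), np.ones(m)])
    I = np.eye(m)
    #  y - Ax <= t   ->  -Ax - t <= -y
    # -(y - Ax) <= t ->   Ax - t <=  y
    A_ub = np.vstack([np.hstack([-A, -I]),
                      np.hstack([ A, -I])])
    b_ub = np.concatenate([-y, y])
    bounds = [(None, None)] * n + [(0, None)] * m
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:n]

rng = np.random.default_rng(0)
n, m = 20, 80                      # m = O(n) Gaussian encoding matrix
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
y = A @ x_true
y[rng.choice(m, size=10, replace=False)] += rng.standard_normal(10) * 5  # corrupt ~12% of entries
x_hat = lp_decode(A, y)
print(np.max(np.abs(x_hat - x_true)))  # near zero when the error rate is below the threshold
```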
Parameterized interconnect order reduction with explicit-and-implicit multi-parameter moment matching for inter/intra-die variations In this paper we propose a novel parameterized interconnect order reduction algorithm, CORE, to efficiently capture both inter-die and intra-die variations. CORE applies a two-step explicit-and-implicit scheme for multiparameter moment matching. As such, CORE can match significantly more moments than other traditional techniques using the same model size. In addition, a recursive Arnoldi algorithm is proposed to quickly construct the Krylov subspace that is required for parameterized order reduction. Applying the recursive Arnoldi algorithm significantly reduces the computation cost for model generation. Several RC and RLC interconnect examples demonstrate that CORE can provide up to 10× better modeling accuracy than other traditional techniques, while achieving smaller model complexity (i.e. size). It follows that these interconnect models generated by CORE can provide more accurate simulation result with cheaper simulation cost, when they are utilized for gate-interconnect co-simulation.
Properties of Interval-Valued Fuzzy Relations, Atanassov's Operators and Decomposable Operations In this paper we study properties of interval-valued fuzzy relations, which were introduced by L.A. Zadeh in 1975. Fuzzy set theory turned out to be a useful tool to describe situations in which the data are imprecise or vague. Interval-valued fuzzy set theory is a generalization of fuzzy set theory, which was also introduced by Zadeh, in 1965. We examine some properties of interval-valued fuzzy relations in the context of Atanassov's operators and decomposable operations in interval-valued fuzzy set theory.
Total variation minimization with separable sensing operator Compressed Imaging is the theory that studies the problem of image recovery from an under-determined system of linear measurements. One of the most popular methods in this field is Total Variation (TV) Minimization, known for accuracy and computational efficiency. This paper applies a recently developed Separable Sensing Operator approach to TV Minimization, using the Split Bregman framework as the optimization approach. The internal cycle of the algorithm is performed by efficiently solving coupled Sylvester equations rather than by an iterative optimization procedure as is done conventionally. Such an approach requires less computer memory and computational time than any other algorithm published to date. Numerical simulations show an improvement, by an order of magnitude or more, in time versus image quality compared to two conventional algorithms.
score_0 to score_13: 1.018056, 0.020435, 0.020435, 0.020435, 0.017354, 0.011802, 0.008505, 0.002684, 0.000186, 0.000007, 0, 0, 0, 0
Hedges: A study in meaning criteria and the logic of fuzzy concepts
The Vienna Definition Language
General formulation of formal grammars By extracting the basic properties common to the formal grammars that have appeared in the existing literature, we develop a general formulation of formal grammars. We define a pseudo grammar and derive from it the well-known probabilistic and fuzzy grammars, among others. Moreover, several interesting grammars such as ⊔∗ grammars, ⊔ ⊓ grammars, ⊔ ⊓ grammars, composite B-fuzzy grammars, and mixed fuzzy grammars, which have never appeared in any other papers before, are derived.
Matrix Equations and Normal Forms for Context-Free Grammars The relationship between the set of productions of a context-free grammar and the corresponding set of defining equations is first pointed out. The closure operation on a matrix of strings is defined and this concept is used to formalize the solution to a set of linear equations. A procedure is then given for rewriting a context-free grammar in Greibach normal form, where the replacement string of each production begins with a terminal symbol. An additional procedure is given for rewriting the grammar so that each replacement string both begins and ends with a terminal symbol. Neither procedure requires the evaluation of regular expressions over the total vocabulary of the grammar, as is required by Greibach's procedure.
A Note on Fuzzy Sets
Fuzzy modifiers based on fuzzy relations In this paper we introduce a new type of fuzzy modifiers (i.e. mappings that transform a fuzzy set into a modified fuzzy set) based on fuzzy relations. We show how they can be applied for the representation of weakening adverbs (more or less, roughly) and intensifying adverbs (very, extremely) in the inclusive and the non-inclusive interpretation. We illustrate their use in an approximate reasoning scheme.
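One way to realize a relation-based modifier of this kind is as a sup-min composition of the fuzzy set with a fuzzy relation on its universe. The sketch below assumes a triangular membership function and a simple "closeness" relation purely for illustration; it is not the paper's construction.

```python
import numpy as np

# Discretized universe and a fuzzy set A ("about 5") on it.
xs = np.linspace(0, 10, 101)
A = np.clip(1 - np.abs(xs - 5) / 2, 0, 1)          # triangular membership around 5

# Fuzzy relation R(x, y): degree to which y is "close to" x (illustrative choice).
R = np.clip(1 - np.abs(xs[:, None] - xs[None, :]) / 1.5, 0, 1)

# Relation-based modifier: B(y) = sup_x min(A(x), R(x, y)).
# With this R the result is a widened version of A, in the spirit of "more or less".
B = np.max(np.minimum(A[:, None], R), axis=0)

print(A[60], B[60])   # membership of x = 6 before and after modification
```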
Linguistic description of the human gait quality The human gait is a complex phenomenon that is repeated in time following an approximate pattern. Using a three-axial accelerometer fixed at the waist, we can obtain a temporal series of measures that contains a numerical description of this phenomenon. Nevertheless, even when we represent these data graphically, it is difficult to interpret them due to the complexity of the phenomenon and the huge amount of available data. This paper describes our research on designing a computational system able to generate linguistic descriptions of this type of quasi-periodic complex phenomena. We used our previous work on both Granular Linguistic Models of Phenomena and Fuzzy Finite State Machines to create a basic linguistic model of the human gait. We have used this model to generate a human-friendly linguistic description of this phenomenon focused on the assessment of gait quality. We include a practical application where we analyze the gait quality of healthy individuals and people with lesions in their limbs.
COR: a methodology to improve ad hoc data-driven linguistic rule learning methods by inducing cooperation among rules This paper introduces a new learning methodology to quickly generate accurate and simple linguistic fuzzy models: the cooperative rules (COR) methodology. It acts on the consequents of the fuzzy rules to find those that are best cooperating. Instead of selecting the consequent with the highest performance in each fuzzy input subspace, as ad-hoc data-driven methods usually do, the COR methodology considers the possibility of using another consequent, different from the best one, when it allows the fuzzy model to be more accurate thanks to having a rule set with the best cooperation. Our proposal has shown good results in solving three different applications when compared to other methods.
Contrast of a fuzzy relation In this paper we address a key problem in many fields: how a structured data set can be analyzed in order to take into account the neighborhood of each individual datum. We propose representing the dataset as a fuzzy relation, associating a membership degree with each element of the relation. We then introduce the concept of interval-contrast, a means of aggregating information contained in the immediate neighborhood of each element of the fuzzy relation. The interval-contrast measures the range of membership degrees present in each neighborhood. We use interval-contrasts to define the necessary properties of a contrast measure, construct several different local contrast and total contrast measures that satisfy these properties, and compare our expressions to other definitions of contrast appearing in the literature. Our theoretical results can be applied to several different fields. In an Appendix A, we apply our contrast expressions to photographic images.
Extensions of the multicriteria analysis with pairwise comparison under a fuzzy environment Multicriteria decision-making (MCDM) problems often involve a complex decision process in which multiple requirements and fuzzy conditions have to be taken into consideration simultaneously. The existing approaches for solving this problem in a fuzzy environment are complex. Combining the concepts of grey relation and pairwise comparison, a new fuzzy MCDM method is proposed. First, the fuzzy analytic hierarchy process (AHP) is used to construct fuzzy weights of all criteria. Then, linguistic terms characterized by L–R triangular fuzzy numbers are used to denote the evaluation values of all alternatives versus subjective and objective criteria. Finally, the aggregation fuzzy assessments of different alternatives are ranked to determine the best selection. Furthermore, this paper uses a numerical example of location selection to demonstrate the applicability of the proposed method. The study results show that this method is an effective means for tackling MCDM problems in a fuzzy environment.
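A minimal sketch of the fuzzy arithmetic such methods rest on: weighted aggregation of L-R triangular fuzzy ratings followed by centroid defuzzification for ranking. The weights, ratings, and the centroid ranking rule are illustrative assumptions, not the paper's data or exact procedure.

```python
import numpy as np

# A triangular fuzzy number is (l, m, u) with l <= m <= u.
def tfn_scale(w, a):
    """Multiply a triangular fuzzy number by a crisp nonnegative weight."""
    return tuple(w * v for v in a)

def tfn_add(a, b):
    return tuple(x + y for x, y in zip(a, b))

def centroid(a):
    """Centroid defuzzification of a triangular fuzzy number."""
    return sum(a) / 3.0

# Illustrative data: 3 alternatives rated on 2 criteria with linguistic terms
# already mapped to triangular fuzzy numbers; crisp criterion weights from AHP.
weights = [0.6, 0.4]
ratings = {
    "A1": [(5, 7, 9), (3, 5, 7)],
    "A2": [(7, 9, 10), (1, 3, 5)],
    "A3": [(3, 5, 7), (7, 9, 10)],
}

scores = {}
for alt, rs in ratings.items():
    agg = (0.0, 0.0, 0.0)
    for w, r in zip(weights, rs):
        agg = tfn_add(agg, tfn_scale(w, r))
    scores[alt] = centroid(agg)

for alt in sorted(scores, key=scores.get, reverse=True):
    print(alt, round(scores[alt], 3))
```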
A fuzzy MCDM method for solving marine transshipment container port selection problems "Transshipment" is a very popular and important issue in the present international trade container transportation market. In order to reduce international trade container transportation operation costs, it is very important for shipping companies to choose the best transshipment container port. The aim of this paper is to present a new Fuzzy Multiple Criteria Decision Making Method (FMCDM) for solving the transshipment container port selection problem under a fuzzy environment. In this paper we first present the canonical representation of the multiplication operation on three fuzzy numbers, and this canonical representation is then applied to the selection of a transshipment container port. Based on the canonical representation, the decision maker of a shipping company can quickly determine the ranking order of all candidate transshipment container ports and easily select the best one.
Sparse Reconstruction by Separable Approximation Finding sparse approximate solutions to large underdetermined linear systems of equations is a common problem in signal/image processing and statistics. Basis pursuit, the least absolute shrinkage and selection operator (LASSO), wavelet-based deconvolution and reconstruction, and compressed sensing (CS) are a few well-known areas in which problems of this type appear. One standard approach is to minimize an objective function that includes a quadratic (ℓ2) error term added to a sparsity-inducing (usually ℓ1) regularizer. We present an algorithmic framework for the more general problem of minimizing the sum of a smooth convex function and a nonsmooth, possibly nonconvex regularizer. We propose iterative methods in which each step is obtained by solving an optimization subproblem involving a quadratic term with diagonal Hessian (i.e., separable in the unknowns) plus the original sparsity-inducing regularizer; our approach is suitable for cases in which this subproblem can be solved much more rapidly than the original problem. Under mild conditions (namely convexity of the regularizer), we prove convergence of the proposed iterative algorithm to a minimum of the objective function. In addition to solving the standard ℓ2-ℓ1 case, our framework yields efficient solution techniques for other regularizers, such as an ℓ∞ norm and group-separable regularizers. It also generalizes immediately to the case in which the data is complex rather than real. Experiments with CS problems show that our approach is competitive with the fastest known methods for the standard ℓ2-ℓ1 problem, as well as being efficient on problems with other separable regularization terms.
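In the standard ℓ2-ℓ1 case with a constant step size, the separable subproblem described here reduces to soft thresholding, i.e. the classical ISTA iteration. The sketch below shows that special case only; the problem data and parameters are made up for illustration.

```python
import numpy as np

def ista(A, y, lam, n_iter=500):
    """Iterative soft-thresholding for  min_x  0.5*||y - Ax||^2 + lam*||x||_1.

    Each step solves a separable subproblem: a quadratic term with a scalar
    (diagonal) Hessian plus the l1 regularizer, whose closed-form solution is
    the soft-thresholding operator.
    """
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)
        z = x - grad / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((60, 200))
x_true = np.zeros(200)
x_true[rng.choice(200, 8, replace=False)] = rng.standard_normal(8)
y = A @ x_true + 0.01 * rng.standard_normal(60)
x_hat = ista(A, y, lam=0.1)
print(np.count_nonzero(np.abs(x_hat) > 1e-3))   # recovered support size
```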
A fuzzy logic system for the detection and recognition of handwritten street numbers Fuzzy logic is applied to the problem of locating and reading street numbers in digital images of handwritten mail. A fuzzy rule-based system is defined that uses uncertain information provided by image processing and neural network-based character recognition modules to generate multiple hypotheses with associated confidence values for the location of the street number in an image of a handwritten address. The results of a blind test of the resultant system are presented to demonstrate the value of this new approach. The results are compared to those obtained using a neural network trained with backpropagation. The fuzzy logic system achieved higher performance rates
A possibilistic approach to the modeling and resolution of uncertain closed-loop logistics Closed-loop logistics planning is an important tactic for the achievement of sustainable development. However, the correlation among the demand, recovery, and landfilling makes the estimation of their rates uncertain and difficult. Although the fuzzy numbers can present such kinds of overlapping phenomena, the conventional method of defuzzification using level-cut methods could result in the loss of information. To retain complete information, the possibilistic approach is adopted to obtain the possibilistic mean and mean square imprecision index (MSII) of the shortage and surplus for uncertain factors. By applying the possibilistic approach, a multi-objective, closed-loop logistics model considering shortage and surplus is formulated. The two objectives are to reduce both the total cost and the root MSII. Then, a non-dominated solution can be obtained to support decisions with lower perturbation and cost. Also, the information on prediction interval can be obtained from the possibilistic mean and root MSII to support the decisions in the uncertain environment. This problem is non-deterministic polynomial-time hard, so a new algorithm based on the spanning tree-based genetic algorithm has been developed. Numerical experiments have shown that the proposed algorithm can yield comparatively efficient and accurate results.
score_0 to score_13: 1.016606, 0.025015, 0.025015, 0.025015, 0.010011, 0.002848, 0.000502, 0.000028, 0.000006, 0.000003, 0.000002, 0, 0, 0
Linguistic Decision-Making Models Using linguistic values to assess results and information about external factors is quite usual in real decision situations. In this article we present a general model for such problems. Utilities are evaluated in a term set of labels and the information is supposed to be linguistic evidence, that is, it is to be represented by a basic assignment of probability (in the sense of Dempster-Shafer) but taking its values on a term set of linguistic likelihoods. Basic decision rules, based on fuzzy risk intervals, are developed and illustrated by several examples. The last section is devoted to analyzing the suitability of considering a hierarchical structure (represented by a tree) for the set of utility labels.
Multi-criteria analysis for a maintenance management problem in an engine factory: rational choice The industrial organization needs to develop better methods for evaluating the performance of its projects. We are interested in problems related to pieces with differing degrees of dirt. In this direction, we propose and evaluate a maintenance decision problem in an engine factory that is specialized in the production, sale and maintenance of medium and slow speed four stroke engines. The main purpose of this paper is to study the problem by means of the analytic hierarchy process to obtain the weights of criteria, and the TOPSIS method as multicriteria decision making to obtain the ranking of alternatives, when the information is given in linguistic terms.
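A compact sketch of the two building blocks named here: criterion weights from the principal eigenvector of an AHP pairwise-comparison matrix, and a TOPSIS ranking of alternatives on a crisp decision matrix. All numbers are illustrative and the linguistic inputs of the paper are not reproduced.

```python
import numpy as np

def ahp_weights(pairwise):
    """Criterion weights as the normalized principal eigenvector of an AHP matrix."""
    vals, vecs = np.linalg.eig(pairwise)
    w = np.real(vecs[:, np.argmax(np.real(vals))])
    return w / w.sum()

def topsis(decision, weights, benefit):
    """Rank alternatives by closeness to the ideal / anti-ideal solutions."""
    norm = decision / np.linalg.norm(decision, axis=0)       # vector normalization
    v = norm * weights
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_plus = np.linalg.norm(v - ideal, axis=1)
    d_minus = np.linalg.norm(v - anti, axis=1)
    return d_minus / (d_plus + d_minus)

# Illustrative: 3 criteria compared pairwise, 4 maintenance alternatives.
pairwise = np.array([[1, 3, 5], [1/3, 1, 2], [1/5, 1/2, 1]])
decision = np.array([[7, 4, 60], [9, 6, 80], [6, 8, 50], [8, 5, 70]], dtype=float)
benefit = np.array([True, True, False])     # third criterion is a cost

w = ahp_weights(pairwise)
closeness = topsis(decision, w, benefit)
print(np.argsort(-closeness))                # alternatives, best first
```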
Evaluating Government Websites Based On A Fuzzy Multiple Criteria Decision-Making Approach This paper presents a framework of website quality evaluation for measuring the performance of government websites. Multiple criteria decision-making (MCDM) is a widely used tool for evaluating and ranking problems containing multiple, usually conflicting criteria. In line with the multi-dimensional characteristics of website quality, MCDM provides an effective framework for an inter-websites comparison involving the evaluation of multiple attributes. It thus ranks different websites compared in terms of their overall performance. This paper models the inter-website comparison problem as an MCDM problem, and presents a practical and selective approach to deal with it. In addition, fuzzy logic is applied to the subjectivity and vagueness in the assessment process. The proposed framework is effectively illustrated to rate Turkish government websites.
Group decision making with linguistic preference relations with application to supplier selection Linguistic preference relation is a useful tool for expressing preferences of decision makers in group decision making according to linguistic scales. In real decision problems, however, there usually exist interactive phenomena among the preferences of decision makers, which makes it difficult to aggregate preference information by conventional additive aggregation operators. Thus, to approximate the human subjective preference evaluation process, it would be more suitable to apply non-additive measure tools without assuming additivity and independence. In this paper, based on the λ-fuzzy measure, we consider dependence among subjective preferences of decision makers to develop some new linguistic aggregation operators, such as the linguistic ordered geometric averaging operator and the extended linguistic Choquet integral operator, to aggregate multiplicative linguistic preference relations and additive linguistic preference relations, respectively. Further, the procedure and algorithm of group decision making based on these new linguistic aggregation operators and linguistic preference relations are given. Finally, a supplier selection example is provided to illustrate the developed approaches.
A hybrid multi-criteria decision-making model for firms competence evaluation In this paper, we present a hybrid multi-criteria decision-making (MCDM) model to evaluate the competence of firms. The competence-based theory reveals that firm competencies are recognized from exclusive and unique capabilities that each firm enjoys in the marketplace and are tightly intertwined within different business functions throughout the company. Therefore, competence in the firm is a composite of various attributes, many of which, tangible and intangible, are difficult to measure. In order to overcome this issue, we bring fuzzy set theory into the measurement of performance. In this paper we first calculate the weight of each criterion through the adaptive analytic hierarchy process (AHP) approach (A^3) method, and then we appraise the performance of firms via linguistic variables which are expressed as trapezoidal fuzzy numbers. In the next step we transform these fuzzy numbers into interval data by means of the α-cut. Then, considering different values of α, we rank the firms through the TOPSIS method with interval data. Since there are different ranks for different α values, we apply the linear assignment method to obtain the final rank for the alternatives.
Fuzzy relational algebra for possibility-distribution-fuzzy-relational model of fuzzy data In the real world, there exist a lot of fuzzy data which cannot or need not be precisely defined. We distinguish two types of fuzziness: one in an attribute value itself and the other in an association of them. For such fuzzy data, we propose a possibility-distribution-fuzzy-relational model, in which fuzzy data are represented by fuzzy relations whose grades of membership and attribute values are possibility distributions. In this model, the former fuzziness is represented by a possibility distribution and the latter by a grade of membership. Relational algebra for the ordinary relational database as defined by Codd includes the traditional set operations and the special relational operations. These operations are classified into the primitive operations, namely, union, difference, extended Cartesian product, selection and projection, and the additional operations, namely, intersection, join, and division. We define the relational algebra for the possibility-distribution-fuzzy-relational model of fuzzy databases.
Approaches to manage hesitant fuzzy linguistic information based on the cosine distance and similarity measures for HFLTSs and their application in qualitative decision making We introduce a family of novel distance and similarity measures for HFLTSs. We develop a cosine-distance-based HFL-TOPSIS method. We develop a cosine-distance-based HFL-VIKOR method. We use a numerical example to illustrate the proposed methods. Qualitative and hesitant information is common in practical decision making processes. In such complicated decision making problems, it is flexible for experts to use comparative linguistic expressions to express their opinions, since linguistic expressions are much closer than single or simple linguistic terms to the human way of thinking and cognition. The hesitant fuzzy linguistic term set (HFLTS) turns out to be a powerful tool for representing and eliciting comparative linguistic expressions. In order to develop some approaches to decision making with hesitant fuzzy linguistic information, in this paper we first introduce a family of novel distance and similarity measures for HFLTSs, such as the cosine distance and similarity measures, the weighted cosine distance and similarity measures, the order weighted cosine distance and similarity measures, and the continuous cosine distance and similarity measures. All these distance and similarity measures are proposed from the geometric point of view, while the existing distance and similarity measures over HFLTSs are based on different forms of algebraic distance measures. Afterwards, based on the hesitant fuzzy linguistic cosine distance measures between hesitant fuzzy linguistic elements, the cosine-distance-based HFL-TOPSIS method and the cosine-distance-based HFL-VIKOR method are developed to deal with hesitant fuzzy linguistic multiple criteria decision making problems. Step-by-step algorithms of these two methods are given for the convenience of applications. Finally, a numerical example concerning the selection of ERP systems is given to illustrate the validity and efficiency of the proposed methods.
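One plausible way to realize a cosine measure over HFLTSs is to encode each hesitant fuzzy linguistic element as an indicator vector over the linguistic term indices and compare vectors by cosine similarity. The encoding and the term set below are illustrative assumptions, not the paper's exact definitions.

```python
import numpy as np

TERMS = ["very poor", "poor", "medium", "good", "very good"]  # illustrative linguistic term set

def hfle_vector(terms):
    """Indicator vector of a hesitant fuzzy linguistic element over the term set."""
    v = np.zeros(len(TERMS))
    for t in terms:
        v[TERMS.index(t)] = 1.0
    return v

def cosine_distance(h1, h2):
    a, b = hfle_vector(h1), hfle_vector(h2)
    cos_sim = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return 1.0 - cos_sim

# "between medium and good" vs. "at least good"
print(cosine_distance(["medium", "good"], ["good", "very good"]))
```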
Linguistic modeling by hierarchical systems of linguistic rules In this paper, we propose an approach to design linguistic models which are accurate to a high degree and may be suitably interpreted. This approach is based on the development of a hierarchical system of linguistic rules learning methodology. This methodology is conceived as a refinement of simple linguistic models which, while preserving their descriptive power, introduces small changes to increase their accuracy. To do so, we extend the structure of the knowledge base of fuzzy rule base systems in a hierarchical way, in order to make it more flexible. This added flexibility will allow us to have linguistic rules defined over linguistic partitions with different granularity levels, and thus to improve the modeling of those problem subspaces where the former models perform poorly.
A satisfactory-oriented approach to multiexpert decision-making with linguistic assessments. This paper proposes a multiexpert decision-making (MEDM) method with linguistic assessments, making use of the notion of random preferences and a so-called satisfactory principle. It is well known that decision-making problems that manage preferences from different experts follow a common resolution scheme composed of two phases: an aggregation phase that combines the individual preferences to obtain a collective preference value for each alternative; and an exploitation phase that orders the collective preferences according to a given criterion, to select the best alternative/s. For our method, instead of using an aggregation operator to obtain a collective preference value, a random preference is defined for each alternative in the aggregation phase. Then, based on a satisfactory principle defined in this paper, which says that it is perfectly satisfactory to select an alternative as the best if its performance is at least as "good" as all the others under the same evaluation scheme, we propose a linguistic choice function to establish a rank ordering among the alternatives. Moreover, we also discuss how this linguistic decision rule can be applied to the MEDM problem in multigranular linguistic contexts. Two application examples taken from the literature are used to illuminate the proposed techniques.
A causal and effect decision making model of service quality expectation using grey-fuzzy DEMATEL approach This research uses a solution based on a combined grey-fuzzy DEMATEL method to address the objective of the study: presenting a perception approach to ranking real estate agent service quality expectations under uncertainty. The ranking of the top five real estate agents might be a key strategic direction for other real estate agents with respect to service quality expectations. The solving procedure is as follows: (i) the weights of criteria and alternatives are described by triangular fuzzy numbers; (ii) a grey possibility degree is used to derive the ranking order of all alternatives; (iii) DEMATEL is used to resolve interdependency relationships among the criteria; and (iv) an empirical example of a real estate agent service quality ranking problem in customer expectation is solved with the proposed approach, indicating that real estate agent R1 (CY real estate agent) is the best selection in terms of service quality in customer expectation.
Automorphisms Of The Algebra Of Fuzzy Truth Values This paper is an investigation of the automorphisms of the algebra of truth values of type-2 fuzzy sets. This algebra contains isomorphic copies of the truth value algebras of type-1 and of interval-valued fuzzy sets. It is shown that these subalgebras are characteristic; that is, they are carried onto themselves by automorphisms of the containing algebra of truth values of fuzzy sets. Some other relevant subalgebras are proved characteristic, including the subalgebra of convex normal functions. The principal tool in this study is the determination of various irreducible elements.
Compressive sensing for sparsely excited speech signals Compressive sensing (CS) has been proposed for signals with sparsity in a linear transform domain. We explore a signal-dependent unknown linear transform, namely the impulse response matrix operating on a sparse excitation, as in the linear model of speech production, for recovering compressive sensed speech. Since the linear transform is signal-dependent and unknown, unlike the standard CS formulation, a codebook of transfer functions is proposed in a matching pursuit (MP) framework for CS recovery. It is found that MP is efficient and effective at recovering CS-encoded speech as well as jointly estimating the linear model. A moderate number of CS measurements and a low-order sparsity estimate will result in MP converging to the same linear transform as direct VQ of the LP vector derived from the original signal. There is also high positive correlation between signal-domain approximation and CS measurement-domain approximation for a large variety of speech spectra.
Handling Fuzziness In Temporal Databases This paper proposes a new data model, called FuzzTime, which is capable of handling both the fuzziness and the temporal aspects of data. These two features are frequently encountered together in many applications. This work is intended as a conceptual framework for advanced applications of database systems. Our approach extends the concept of the relational data model to provide such a capability. The notions of linguistic variables, fuzzy set theory and possibility theory are employed in handling the fuzziness aspect, and the discrete time model is assumed. Some important time-related operators to be used in temporal query evaluation in the presence of fuzziness are also discussed.
Generating realistic stimuli for accurate power grid analysis Power analysis tools are an integral component of any current power sign-off methodology. The performance of a design's power grid affects the timing and functionality of a circuit, directly impacting the overall performance. Ensuring power grid robustness implies taking into account, among others, static and dynamic effects of voltage drop, ground bounce, and electromigration. This type of verification is usually done by simulation, targeting a worst-case scenario where devices, switching almost simultaneously, could impose stern current demands on the power grid. While determination of the exact worst-case switching conditions from the grid perspective is usually not practical, the choice of simulation stimuli has a critical effect on the results of the analysis. Targeting safe but unrealistic settings could lead to pessimistic results and costly overdesigns in terms of die area. In this article we describe a software tool that generates a reasonable, realistic set of stimuli for simulation. The approach proposed accounts for timing and spatial restrictions that arise from the circuit's netlist and placement and generates an approximation to the worst-case condition. The resulting stimuli indicate that only a fraction of the gates change in any given timing window, leading to a more robust verification methodology, especially in the dynamic case. Generating such stimuli is akin to performing a standard static timing analysis, so the tool fits well within conventional design frameworks. Furthermore, the tool can be used for hotspot detection in early design stages.
score_0 to score_13: 1.007657, 0.006847, 0.006847, 0.002334, 0.001764, 0.001351, 0.000678, 0.00036, 0.000121, 0.000043, 0.000005, 0, 0, 0
View Scalable Multiview Video Coding Using 3-D Warping With Depth Map Multiview video coding demands high compression rates as well as view scalability, which enables the video to be displayed on a multitude of different terminals. In order to achieve view scalability, it is necessary to limit the inter-view prediction structure. In this paper, we propose a new multiview video coding scheme that can improve the compression efficiency under such a limited inter-view prediction structure. All views are divided into two groups in the proposed scheme: base view and enhancement views. The proposed scheme first estimates a view-dependent geometry of the base view. It then uses a video encoder to encode the video of base view. The view-dependent geometry is also encoded by the video encoder. The scheme then generates prediction images of enhancement views from the decoded video and the view-dependent geometry by using image-based rendering techniques, and it makes residual signals for each enhancement view. Finally, it encodes residual signals by the conventional video encoder as if they were regular video signals. We implement one encoder that employs this scheme by using a depth map as the view-dependent geometry and 3-D warping as the view generation method. In order to increase the coding efficiency, we adopt the following three modifications: (1) object-based interpolation on 3-D warping; (2) depth estimation with consideration of rate-distortion costs; and (3) quarter-pel accuracy depth representation. Experiments show that the proposed scheme offers about 30% higher compression efficiency than the conventional scheme, even though one depth map video is added to the original multiview video.
Shape-adaptive wavelet encoding of depth maps We present a novel depth-map codec aimed at free-viewpoint 3DTV. The proposed codec relies on a shape-adaptive wavelet transform and an explicit representation of the locations of major depth edges. Unlike classical wavelet transforms, the shape-adaptive transform generates small wavelet coefficients along depth edges, which greatly reduces the data entropy. The wavelet transform is implemented by shape-adaptive lifting, which enables fast computations and perfect reconstruction. We also develop a novel rate-constrained edge detection algorithm, which integrates the idea of significance bitplanes into the Canny edge detector. Along with a simple chain code, it provides an efficient way to extract and encode edges. Experimental results on synthetic and real data confirm the effectiveness of the proposed algorithm, with PSNR gains of 5 dB and more over the Middlebury dataset.
A new methodology to derive objective quality assessment metrics for scalable multiview 3D video coding With the growing demand for 3D video, efforts are underway to incorporate it in the next generation of broadcast and streaming applications and standards. 3D video is currently available in games, entertainment, education, security, and surveillance applications. A typical scenario for multiview 3D consists of several 3D video sequences captured simultaneously from the same scene with the help of multiple cameras from different positions and through different angles. Multiview video coding provides a compact representation of these multiple views by exploiting the large amount of inter-view statistical dependencies. One of the major challenges in this field is how to transmit the large amount of data of a multiview sequence over error prone channels to heterogeneous mobile devices with different bandwidth, resolution, and processing/battery power, while maintaining a high visual quality. Scalable Multiview 3D Video Coding (SMVC) is one of the methods to address this challenge; however, the evaluation of the overall visual quality of the resulting scaled-down video requires a new objective perceptual quality measure specifically designed for scalable multiview 3D video. Although several subjective and objective quality assessment methods have been proposed for multiview 3D sequences, no comparable attempt has been made for quality assessment of scalable multiview 3D video. In this article, we propose a new methodology to build suitable objective quality assessment metrics for different scalable modalities in multiview 3D video. Our proposed methodology considers the importance of each layer and its content as a quality of experience factor in the overall quality. Furthermore, in addition to the quality of each layer, the concept of disparity between layers (inter-layer disparity) and disparity between the units of each layer (intra-layer disparity) is considered as an effective feature to evaluate overall perceived quality more accurately. Simulation results indicate that by using this methodology, more efficient objective quality assessment metrics can be introduced for each multiview 3D video scalable modalities.
Depth Reconstruction Filter and Down/Up Sampling for Depth Coding in 3-D Video A depth image represents three-dimensional (3-D) scene information and is commonly used for depth image-based rendering (DIBR) to support 3-D video and free-viewpoint video applications. The virtual view is generally rendered by the DIBR technique and its quality depends highly on the quality of depth image. Thus, efficient depth coding is crucial to realize the 3-D video system. In this letter, w...
View synthesis prediction for multiview video coding We propose a rate-distortion-optimized framework that incorporates view synthesis for improved prediction in multiview video coding. In the proposed scheme, auxiliary information, including depth data, is encoded and used at the decoder to generate the view synthesis prediction data. The proposed method employs optimal mode decision including view synthesis prediction, and sub-pixel reference matching to improve prediction accuracy of the view synthesis prediction. Novel variants of the skip and direct modes are also presented, which infer the depth and correction vector information from neighboring blocks in a synthesized reference picture to reduce the bits needed for the view synthesis prediction mode. We demonstrate two multiview video coding scenarios in which view synthesis prediction is employed. In the first scenario, the goal is to improve the coding efficiency of multiview video where block-based depths and correction vectors are encoded by CABAC in a lossless manner on a macroblock basis. A variable block-size depth/motion search algorithm is described. Experimental results demonstrate that view synthesis prediction does provide some coding gains when combined with disparity-compensated prediction. In the second scenario, the goal is to use view synthesis prediction for reducing rate overhead incurred by transmitting depth maps for improved support of 3DTV and free-viewpoint video applications. It is assumed that the complete depth map for each view is encoded separately from the multiview video and used at the receiver to generate intermediate views. We utilize this information for view synthesis prediction to improve overall coding efficiency. Experimental results show that the rate overhead incurred by coding depth maps of varying quality could be offset by utilizing the proposed view synthesis prediction techniques to reduce the bitrate required for coding multiview video.
3-D Video Representation Using Depth Maps Current 3-D video (3DV) technology is based on stereo systems. These systems use stereo video coding for pictures delivered by two input cameras. Typically, such stereo systems only reproduce these two camera views at the receiver, and stereoscopic displays for multiple viewers require wearing special 3-D glasses. On the other hand, emerging autostereoscopic multiview displays emit a large number of views to enable 3-D viewing for multiple users without requiring 3-D glasses. For representing a large number of views, a multiview extension of stereo video coding is used, typically requiring a bit rate that is proportional to the number of views. However, since the quality improvement of multiview displays will be governed by an increase of emitted views, a format is needed that allows the generation of arbitrary numbers of views with the transmission bit rate being constant. Such a format is the combination of video signals and associated depth maps. The depth maps provide disparities associated with every sample of the video signal that can be used to render arbitrary numbers of additional views via view synthesis. This paper describes efficient coding methods for video and depth data. For the generation of views, synthesis methods are presented, which mitigate errors from depth estimation and coding.
Subjective Study On Compressed Asymmetric Stereoscopic Video Asymmetric stereoscopic video coding takes advantage of the binocular suppression of human vision by representing one of the views with a lower quality. This paper describes a subjective quality test with asymmetric stereoscopic video. Different options for achieving compressed mixed-quality and mixed-resolution asymmetric stereo video were studied and compared to symmetric stereo video. The bitstreams for different coding arrangements were simulcast-coded according to the Advanced Video Coding (H.264/AVC) standard. The results showed that in most cases, resolution-asymmetric stereo video with a downsampling ratio of 1/2 along both coordinate axes provided similar quality as symmetric and quality-asymmetric full-resolution stereo video. These results were achieved under the same bitrate constraint while the processing complexity decreased considerably. Moreover, in all test cases, the symmetric and mixed-quality full-resolution stereoscopic video bitstreams resulted in a similar quality at the same bitrates.
Transport and Storage Systems for 3-D Video Using MPEG-2 Systems, RTP, and ISO File Format Three-dimensional video based on stereo and multiview video representations is currently being introduced to the home through various channels, including broadcast such as via cable, terrestrial and satellite transmission, streaming and download through the Internet, as well as on storage media such as Blu-ray discs. In order to deliver 3-D content to the consumer, different media system technologies have been standardized or are currently under development. The most important standards are MPEG-2 systems, which is used for digital broadcast and storage on Blu-ray discs, real-time transport protocol (RTP), which is used for real-time transmissions over the Internet, and the ISO base media file format, which can be used for progressive download in video-on-demand applications. In this paper, we give an overview of these three system layer approaches, where the main focus is on the multiview video coding (MVC) extension of H.264/AVC and the application of the system approaches to the delivery and storage of MVC.
On the way towards fourth-generation mobile: 3GPP LTE and LTE-advanced Long-Term Evolution (LTE) is the new standard recently specified by the 3GPP on the way towards fourth-generation mobile. This paper presents the main technical features of this standard as well as its performance in terms of peak bit rate and average cell throughput, among others. LTE entails a big technological improvement as compared with the previous 3G standard. However, this paper also demonstrates that LTE performance does not fulfil the technical requirements established by ITU-R to classify one radio access technology as a member of the IMT-Advanced family of standards. Thus, this paper describes the procedure followed by the 3GPP to address these challenging requirements. Through the design and optimization of new radio access techniques and a further evolution of the system, the 3GPP is laying down the foundations of the future LTE-Advanced standard, the 3GPP candidate for 4G. This paper offers a brief insight into these technological trends.
Look-ahead rate adaptation algorithm for DASH under varying network environments Dynamic Adaptive Streaming over HTTP (DASH) is slowly becoming the most popular online video streaming technology. DASH enables the video player to adapt the quality of the multimedia content being downloaded in order to match the varying network conditions. The key challenge with DASH is to decide the optimal video quality for the next video segment under the current network conditions. The aim is to download the next segment before the player experiences buffer starvation. Several rate adaptation methodologies proposed so far rely on TCP throughput measurements and the current buffer occupancy. However, these techniques do not consider any information regarding the next segment that is to be downloaded. They assume that the segment sizes are uniform and assign equal weights to all the segments. However, due to the video encoding techniques employed, different segments of the video with equal playback duration are found to be of different sizes. In the current paper, we propose to list the individual segment characteristics in the Media Presentation Description (MPD) file during the preprocessing stage; this is later used in the segment download time estimations. We also propose a novel rate adaptation methodology that uses the individual segment sizes in addition to the measured TCP throughput and the buffer occupancy estimate to select the best video rate for the next segments.
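A hedged sketch of a segment-size-aware rate decision in the spirit of this abstract: choose the highest representation whose next segment, given its actual byte size and the measured throughput, can be fetched without draining the buffer below a safety threshold. The function name, thresholds, and MPD layout are assumptions, not taken from the paper or the DASH specification.

```python
def choose_bitrate(segment_sizes_bytes, throughput_bps, buffer_s, segment_duration_s,
                   safety_buffer_s=4.0):
    """Return the index of the highest-quality representation that is safe to fetch.

    segment_sizes_bytes: per-representation size of the *next* segment, ascending quality
                         (assumed to come from a per-segment size list in the MPD).
    throughput_bps:      current throughput estimate in bits per second.
    buffer_s:            seconds of media currently buffered.
    """
    for level in reversed(range(len(segment_sizes_bytes))):
        download_time_s = segment_sizes_bytes[level] * 8.0 / throughput_bps
        # Buffer after the download: drained by the download time, refilled by one segment.
        projected_buffer = buffer_s - download_time_s + segment_duration_s
        if download_time_s < buffer_s and projected_buffer >= safety_buffer_s:
            return level
    return 0  # fall back to the lowest representation

# The next segment is unusually large at the top quality, so the player steps down one level.
sizes = [250_000, 600_000, 1_800_000]          # bytes for low/mid/high quality
print(choose_bitrate(sizes, throughput_bps=3_000_000, buffer_s=6.0, segment_duration_s=2.0))
```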
A Definition of a Nonprobabilistic Entropy in the Setting of Fuzzy Sets Theory
Combination of interval-valued fuzzy set and soft set The soft set theory, proposed by Molodtsov, can be used as a general mathematical tool for dealing with uncertainty. By combining the interval-valued fuzzy set and soft set models, the purpose of this paper is to introduce the concept of the interval-valued fuzzy soft set. The complement, ''AND'' and ''OR'' operations are defined on the interval-valued fuzzy soft sets. The DeMorgan's, associative and distribution laws of the interval-valued fuzzy soft sets are then proved. Finally, a decision problem is analyzed by the interval-valued fuzzy soft set. Some numerical examples are employed to substantiate the conceptual arguments.
On the sparseness of 1-norm support vector machines. There is some empirical evidence showing that 1-norm Support Vector Machines (1-norm SVMs) have good sparseness; however, both how much sparseness 1-norm SVMs can achieve and whether they have a sparser representation than standard SVMs remain unclear. In this paper we investigate the sparseness of 1-norm SVMs. Two upper bounds on the number of nonzero coefficients in the decision function of 1-norm SVMs are presented. First, the number of nonzero coefficients in 1-norm SVMs is at most equal to the number of only the exact support vectors lying on the +1 and -1 discriminating surfaces, while that in standard SVMs is equal to the number of support vectors, which implies that 1-norm SVMs have better sparseness than standard SVMs. Second, the number of nonzero coefficients is at most equal to the rank of the sample matrix. A brief review of the geometry of linear programming and the primal steepest edge pricing simplex method is given, which allows us to provide the proof of the two upper bounds and evaluate their tightness by experiments. Experimental results on toy data sets and the UCI data sets illustrate our analysis.
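The 1-norm SVM discussed here can be written as a linear program by splitting the weight vector into nonnegative parts. The sketch below is a generic formulation of that LP; the data, the value of C, and the use of scipy's solver are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np
from scipy.optimize import linprog

def one_norm_svm(X, y, C=1.0):
    """Train a 1-norm (LP) SVM:  min ||w||_1 + C*sum(xi)  s.t.  y_i(w.x_i + b) >= 1 - xi_i.

    Variables: [u (d); v (d); b (1); xi (n)] with w = u - v and u, v, xi >= 0.
    """
    n, d = X.shape
    c = np.concatenate([np.ones(2 * d), [0.0], C * np.ones(n)])
    # Margin constraints rewritten as A_ub @ z <= b_ub:
    #   -y_i * (x_i @ (u - v) + b) - xi_i <= -1
    Yx = y[:, None] * X
    A_ub = np.hstack([-Yx, Yx, -y[:, None], -np.eye(n)])
    b_ub = -np.ones(n)
    bounds = [(0, None)] * (2 * d) + [(None, None)] + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    u, v = res.x[:d], res.x[d:2 * d]
    return u - v, res.x[2 * d]   # (w, b)

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(1.5, 1, (40, 10)), rng.normal(-1.5, 1, (40, 10))])
y = np.concatenate([np.ones(40), -np.ones(40)])
w, b = one_norm_svm(X, y, C=1.0)
print("nonzero weights:", np.count_nonzero(np.abs(w) > 1e-6))  # typically sparse
```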
Path criticality computation in parameterized statistical timing analysis This paper presents a method to compute criticality probabilities of paths in parameterized statistical static timing analysis (SSTA). We partition the set of all paths into several groups and formulate the path criticality as a joint probability of inequalities. Before evaluating the joint probability directly, we simplify the inequalities through algebraic elimination, handling topological correlation. Our proposed method uses conditional probabilities to obtain the joint probability, and the statistics of random variables representing process parameters are changed due to the given conditions. To calculate the conditional statistics of the random variables, we derive analytic formulas by extending Clark's work. This allows us to obtain the conditional probability density function of a path delay, given that the path is critical, as well as to compute criticality probabilities of paths. Our experimental results show that the proposed method provides 4.2X better accuracy on average in comparison to the state-of-the-art method.
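For intuition only, criticality probabilities of this kind can also be estimated by brute-force Monte Carlo over the parameter variations (the paper's analytic conditioning is what avoids this cost at scale). The linear delay model and all numbers below are illustrative assumptions, not the paper's benchmarks.

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative model: each path delay is a linear function of shared process
# parameters plus an independent term (a crude stand-in for parameterized SSTA).
n_paths, n_params, n_samples = 4, 3, 100_000
nominal = np.array([10.0, 9.5, 9.8, 9.0])                  # nominal path delays
sens = rng.normal(0, 0.3, (n_paths, n_params))             # sensitivities to parameters
indep_sigma = 0.2

params = rng.standard_normal((n_samples, n_params))         # shared (correlated) variation
indep = rng.normal(0, indep_sigma, (n_samples, n_paths))    # independent variation
delays = nominal + params @ sens.T + indep                  # shape (n_samples, n_paths)

# Criticality probability of path i = P(path i has the maximum delay).
critical = np.argmax(delays, axis=1)
criticality = np.bincount(critical, minlength=n_paths) / n_samples
print(criticality)
```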
score_0 to score_13: 1.030193, 0.028999, 0.028571, 0.021751, 0.009696, 0.005693, 0.000597, 0.000061, 0.000006, 0, 0, 0, 0, 0
Differential RAID: rethinking RAID for SSD reliability Deployment of SSDs in enterprise settings is limited by the low erase cycles available on commodity devices. Redundancy solutions such as RAID can potentially be used to protect against the high Bit Error Rate (BER) of aging SSDs. Unfortunately, such solutions wear out redundant devices at similar rates, inducing correlated failures as arrays age in unison. We present Diff-RAID, a new RAID variant that distributes parity unevenly across SSDs to create age disparities within arrays. By doing so, Diff-RAID balances the high BER of old SSDs against the low BER of young SSDs. Diff-RAID provides much greater reliability for SSDs compared to RAID-4 and RAID-5 for the same space overhead, and offers a trade-off curve between throughput and reliability.
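The core idea, uneven parity placement creating an age disparity, can be illustrated with a tiny write-load model: each small write programs one data block and the parity block of its stripe, so a device holding more parity ages faster. The weights and the simplified RAID-5-style accounting below are assumptions for illustration, not Diff-RAID's exact scheme.

```python
import numpy as np

rng = np.random.default_rng(4)

n_dev = 5
n_stripes = 10_000
# Uneven parity assignment (the Diff-RAID idea); conventional RAID-5 would use equal shares.
parity_weights = np.array([0.70, 0.10, 0.10, 0.05, 0.05])
parity_dev = rng.choice(n_dev, size=n_stripes, p=parity_weights)

wear = np.zeros(n_dev)          # program/erase operations per device
for _ in range(100_000):        # random small writes
    s = rng.integers(n_stripes)
    # Pick a data device of this stripe (any device except the stripe's parity holder).
    d = rng.integers(n_dev - 1)
    if d >= parity_dev[s]:
        d += 1
    wear[d] += 1                # data block rewrite
    wear[parity_dev[s]] += 1    # parity block rewrite

print(wear / wear.mean())       # device 0 ages much faster, creating the age disparity
```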
On efficient wear leveling for large-scale flash-memory storage systems Flash memory has won its edge over many other storage media for embedded systems because it better tolerates the extreme environments to which embedded systems are exposed. In this paper, techniques referred to as wear leveling, which lengthen the overall lifespan of flash memory, are considered. The paper presents the dual-pool algorithm, which realizes two key ideas: to stop the wearing of blocks by storing cold data in them, and to smartly leave blocks alone until wear leveling takes effect. The proposed algorithm requires no complicated tuning, and it resists changes of spatial locality in workloads. Extensive evaluation and comparison were conducted, and the merits of the proposed algorithm are justified in terms of wear-leveling performance and resource conservation.
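A heavily simplified rendition of the dual-pool idea is sketched below: hot and cold pools of blocks, with a content swap whenever the erase-count gap between the most-worn hot block and the least-worn cold block exceeds a threshold. The threshold, pool sizes and workload are illustrative assumptions, not the paper's parameters.

```python
# Highly simplified dual-pool sketch: worn blocks are parked with cold data,
# young blocks take over the hot data, keeping the erase-count spread bounded.
import random

TH = 16                       # erase-count gap that triggers cold-data migration
rng = random.Random(0)
erase = [0] * 32              # per-block erase counters
hot = set(range(16))          # blocks currently holding hot (frequently rewritten) data
cold = set(range(16, 32))     # blocks currently holding cold data

for _ in range(20_000):
    blk = rng.choice(tuple(hot))    # hot data keeps getting rewritten and erased
    erase[blk] += 1
    oldest = max(hot, key=lambda b: erase[b])      # most-worn hot block
    youngest = min(cold, key=lambda b: erase[b])   # least-worn cold block
    if erase[oldest] - erase[youngest] > TH:
        # swap contents: the worn block now rests with cold data,
        # the young block takes over the hot data and starts wearing
        hot.remove(oldest); cold.add(oldest)
        cold.remove(youngest); hot.add(youngest)

print("erase-count spread:", max(erase) - min(erase), "max:", max(erase))
```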
A set-based mapping strategy for flash-memory reliability enhancement With the wide applicability of flash memory in various application domains, reliability has become a very critical issue. This research is motivated by the need to resolve the lifetime problem of flash memory and by a strong demand for turning thrown-away flash-memory chips into downgraded products. We propose a set-based mapping strategy with an effective implementation and low resource requirements, e.g., SRAM. A configurable management design and the wear-leveling issue are considered. The behavior of the proposed method is also analyzed with respect to popular implementations in the industry. We show that the endurance of flash memory can be significantly improved by a series of experiments over a realistic trace. Our experiments show that read performance is also largely improved.
A commitment-based management strategy for the performance and reliability enhancement of flash-memory storage systems Cost has been a major driving force in the development of the flash memory technology, but has also introduced serious challenges on reliability and performance for future products. In this work, we propose a commitment-based management strategy to resolve the reliability problem of many flash-memory products. A three-level address translation architecture with an adaptive block mapping mechanism is proposed to accelerate the address translation process with a limited amount of the RAM usage. Parallelism of operations over multiple chips is also explored with the considerations of the write constraints of multi-level-cell flash memory chips.
A version-based strategy for reliability enhancement of flash file systems In recent years, reliability has become one critical issue in the designs of flash file systems due to the growing unreliability of advanced flash-memory chips. In this paper, a version-based strategy with optimal space utilization is proposed to maintain the consistency among page versions of a file for potential recovery needs with the considerations of the write constraints of multi-level-cell flash memory. A series of experiments was conducted to show that the proposed strategy could improve the reliability of flash file systems with limited management and space overheads.
Statistical timing based on incomplete probabilistic descriptions of parameter uncertainty Existing approaches to timing analysis under uncertainty are based on restrictive assumptions. Statistical STA techniques assume that the full probabilistic distribution of parameter uncertainty is available; in reality, the complete probabilistic description often cannot be obtained. In this paper, a new paradigm for parameter uncertainty description is proposed as a way to consistently and rigorously handle partially available descriptions of parameter uncertainty. The paradigm is based on a theory of interval probabilistic models that permit handling uncertainty that is described in a distribution-free mode - just via the range, the mean, and the variance. This permits effectively handling multiple real-life challenges, including imprecise and limited information about the distributions of process parameters, parameters coming from different populations, and the sources of uncertainty that are too difficult to handle via full probabilistic measures (e.g. on-chip supply voltage variation). Specifically, analytical techniques for bounding the distributions of probabilistic interval variables are proposed. Besides, a provably correct strategy for fast Monte Carlo simulation based on probabilistic interval variables is introduced. A path-based timing algorithm implementing the novel modeling paradigm, as well as handling the traditional variability descriptions, has been developed. The results indicate the proposed algorithm can improve the upper bound of the 90th-percentile circuit delay, on average, by 5.3% across the ISCAS'85 benchmark circuits, compared to the worst-case timing estimates that use only the interval information of the partially specified parameters.
Some Defects in Finite-Difference Edge Finders This work illustrates and explains various artifacts in the output of five finite difference edge finders, those of J.F. Canny (1983, 1986), R.A. Boie et al. (1986) and R.A. Boie and I.J. Cox (1987), and three variations on that of D. Marr and E.C. Hildreth (1980), reimplemented with a common output format and method of noise suppression. These artifacts include gaps in boundaries, spurious boundaries, and deformation of region shape.
A Tutorial on Support Vector Machines for Pattern Recognition The tutorial starts with an overview of the concepts of VC dimension and structural risk minimization. We then describe linear Support Vector Machines (SVMs) for separable and non-separable data, working through a non-trivial example in detail. We describe a mechanical analogy, and discuss when SVM solutions are unique and when they are global. We describe how support vector training can be practically implemented, and discuss in detail the kernel mapping technique which is used to construct SVM solutions which are nonlinear in the data. We show how Support Vector machines can have very large (even infinite) VC dimension by computing the VC dimension for homogeneous polynomial and Gaussian radial basis function kernels. While very high VC dimension would normally bode ill for generalization performance, and while at present there exists no theory which shows that good generalization performance is guaranteed for SVMs, there are several arguments which support the observed high accuracy of SVMs, which we review. Results of some experiments which were inspired by these arguments are also presented. We give numerous examples and proofs of most of the key theorems. There is new material, and I hope that the reader will find that even old material is cast in a fresh light.
Reconstruction of a low-rank matrix in the presence of Gaussian noise. This paper addresses the problem of reconstructing a low-rank signal matrix observed with additive Gaussian noise. We first establish that, under mild assumptions, one can restrict attention to orthogonally equivariant reconstruction methods, which act only on the singular values of the observed matrix and do not affect its singular vectors. Using recent results in random matrix theory, we then propose a new reconstruction method that aims to reverse the effect of the noise on the singular value decomposition of the signal matrix. In conjunction with the proposed reconstruction method we also introduce a Kolmogorov–Smirnov based estimator of the noise variance.
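Since any orthogonally equivariant reconstruction acts only on the singular values, a generic version can be sketched with a simple soft-thresholding rule standing in for the paper's specific shrinkage function; the dimensions, rank, noise level and threshold below are arbitrary choices for illustration.

```python
# Orthogonally equivariant denoising sketch: keep the singular vectors of the
# observation, shrink only its singular values (here by soft thresholding).
import numpy as np

def shrink_reconstruct(Y, tau):
    """Soft-threshold the singular values of Y by tau; singular vectors unchanged."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

rng = np.random.default_rng(0)
n, p, rank, sigma = 200, 100, 3, 1.0
X = rng.standard_normal((n, rank)) @ rng.standard_normal((rank, p))   # low-rank signal
Y = X + sigma * rng.standard_normal((n, p))                           # noisy observation
tau = sigma * (np.sqrt(n) + np.sqrt(p))   # rough bound on the noise's top singular value
Xhat = shrink_reconstruct(Y, tau)
print("noisy error:", np.linalg.norm(Y - X), " denoised error:", np.linalg.norm(Xhat - X))
```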
Proceedings of the 41st Design Automation Conference, DAC 2004, San Diego, CA, USA, June 7-11, 2004
Using polynomial chaos to compute the influence of multiple random surfers in the PageRank model The PageRank equation computes the importance of pages in a web graph relative to a single random surfer with a constant teleportation coefficient. To be globally relevant, the teleportation coefficient should account for the influence of all users. Therefore, we correct the PageRank formulation by modeling the teleportation coefficient as a random variable distributed according to user behavior. With this correction, the PageRank values themselves become random. We present two methods to quantify the uncertainty in the random PageRank: a Monte Carlo sampling algorithm and an algorithm based on the truncated polynomial chaos expansion of the random quantities. With each of these methods, we compute the expectation and standard deviation of the PageRanks. Our statistical analysis shows that the standard deviations of the PageRanks are uncorrelated with the PageRank vector.
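A minimal Monte Carlo rendition of this idea is sketched below: sample the teleportation coefficient from a distribution meant to mimic user behavior, solve PageRank for each sample, and report the mean and standard deviation. The toy graph and the Beta(8, 2) choice are assumptions for illustration, not the paper's data or model.

```python
# Monte Carlo over a random teleportation coefficient: each sampled alpha
# yields one PageRank vector; we summarize with mean and standard deviation.
import numpy as np

def pagerank(P, alpha, tol=1e-10, max_iter=1000):
    """Power iteration for x = alpha * P^T x + (1 - alpha) * v, P row-stochastic."""
    n = P.shape[0]
    v = np.full(n, 1.0 / n)        # uniform teleportation vector
    x = v.copy()
    for _ in range(max_iter):
        x_new = alpha * P.T @ x + (1.0 - alpha) * v
        if np.abs(x_new - x).sum() < tol:
            break
        x = x_new
    return x

P = np.array([[0.0, 0.5, 0.5],     # tiny illustrative web graph (random-walk matrix)
              [1.0, 0.0, 0.0],
              [0.5, 0.5, 0.0]])

rng = np.random.default_rng(1)
alphas = rng.beta(8, 2, size=2000)           # assumed surfer-behavior distribution
samples = np.array([pagerank(P, a) for a in alphas])
print("mean PageRank:", samples.mean(axis=0))
print("std  PageRank:", samples.std(axis=0))
```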
An overview of quality of experience measurement challenges for video applications in IP networks The increase in multimedia content on the Internet has created a renewed interest in quality assessment. There is, however, a main difference from the traditional quality assessment approaches: the focus now lies on the user-perceived quality, as opposed to the network-centered approach classically proposed. In this paper we overview the most relevant challenges in performing Quality of Experience (QoE) assessment in IP networks and highlight the particular considerations necessary when compared to alternative mechanisms already deployed, such as Quality of Service (QoS). To assist in handling such challenges, we first discuss the different approaches to Quality of Experience assessment along with the most relevant QoE metrics, and then we discuss how they are used to provide objective results about user satisfaction.
Fuzzy Concepts and Formal Methods: A Fuzzy Logic Toolkit for Z It has been recognised that formal methods are useful as a modelling tool in requirements engineering. Specification languages such as Z permit the precise and unambiguous modelling of system properties and behaviour. However, some system problems, particularly those drawn from the IS problem domain, may be difficult to model in crisp or precise terms. It may also be desirable that formal modelling should commence as early as possible, even when our understanding of parts of the problem domain is only approximate. This paper suggests fuzzy set theory as a possible representation scheme for this imprecision or approximation. We provide a summary of a toolkit that defines the operators, measures and modifiers necessary for the manipulation of fuzzy sets and relations. We also provide some examples of the laws which establish an isomorphism between the extended notation presented here and conventional Z when applied to boolean sets and relations.
Designing type-2 fuzzy logic system controllers via fuzzy Lyapunov synthesis for the output regulator of a servomechanism with nonlinear backlash Fuzzy Lyapunov Synthesis is extended to the design of Type-2 Fuzzy Logic System Controllers for the output regulation problem for a servomechanism with nonlinear backlash. The problem in question is to design a feedback controller so as to obtain the closed-loop system in which all trajectories are bounded and the load of the driver is regulated to a desired position while also attenuating the influence of external disturbances. The servomotor position is the only measurement available for feedback; the proposed extension is far from trivial because of nonminimum phase properties of the system. Performance issues of the Type-2 Fuzzy Logic Regulator constructed are illustrated in a simulation study.
Scores: 1.2105, 0.2105, 0.2105, 0.10525, 0.0025, 0, 0, 0, 0, 0, 0, 0, 0, 0
Automorphisms Of The Algebra Of Fuzzy Truth Values This paper is an investigation of the automorphisms of the algebra of truth values of type-2 fuzzy sets. This algebra contains isomorphic copies of the truth value algebras of type-1 and of interval-valued fuzzy sets. It is shown that these subalgebras are characteristic; that is, they are carried onto themselves by automorphisms of the containing algebra of truth values of fuzzy sets. Some other relevant subalgebras are proved characteristic, including the subalgebra of convex normal functions. The principal tool in this study is the determination of various irreducible elements.
Development of a type-2 fuzzy proportional controller Studies have shown that PID controllers can be realized by type-1 (conventional) fuzzy logic systems (FLSs). However, the input-output mappings of such fuzzy PID controllers are fixed. The control performance would, therefore, vary if the system parameters are uncertain. This paper aims at developing a type-2 FLS to control a process whose parameters are uncertain. A method for designing type-2 triangular membership functions with the desired generalized centroid is first proposed. By using this type-2 fuzzy set to partition the output domain, a type-2 fuzzy proportional controller is obtained. It is shown that the type-2 fuzzy logic system is equivalent to a proportional controller that may assume a range of gains. Simulation results are presented to demonstrate that the performance of the proposed controller can be maintained even when the system parameters deviate from their nominal values.
Some general comments on fuzzy sets of type-2 This paper contains some general comments on the algebra of truth values of fuzzy sets of type 2. It details the precise mathematical relationship with the algebras of truth values of ordinary fuzzy sets and of interval-valued fuzzy sets. Subalgebras of the algebra of truth values and t-norms on them are discussed. There is some discussion of finite type-2 fuzzy sets.
Sensed Signal Strength Forecasting for Wireless Sensors Using Interval Type-2 Fuzzy Logic System. In this paper, we present a new approach for sensed signal strength forecasting in wireless sensors using interval type-2 fuzzy logic system (FLS). We show that a type-2 fuzzy membership function, i.e., a Gaussian MF with uncertain mean is most appropriate to model the sensed signal strength of wireless sensors. We demonstrate that the sensed signals of wireless sensors are self-similar, which means it can be forecasted. An interval type-2 FLS is designed for sensed signal forecasting and is compared against a type-1 FLS. Simulation results show that the interval type-2 FLS performs much better than the type-1 FLS in sensed signal forecasting. This application can be further used for power on/off control in wireless sensors to save battery energy.
T-Norms for Type-2 Fuzzy Sets This paper is concerned with the definition of t-norms on the algebra of truth values of type-2 fuzzy sets. Our proposed definition extends the definition of ordinary t-norms on the unit interval and extends our definition of t-norms on the algebra of truth values for interval-valued fuzzy sets.
Pattern recognition using type-II fuzzy sets Type II fuzzy sets are a generalization of the ordinary fuzzy sets in which the membership value for each member of the set is itself a fuzzy set in [0, 1]. We introduce a similarity measure for measuring the similarity, or compatibility, between two type-II fuzzy sets. With this new similarity measure we show that type-II fuzzy sets provide us with a natural language for formulating classification problems in pattern recognition.
Xor-Implications and E-Implications: Classes of Fuzzy Implications Based on Fuzzy Xor The main contribution of this paper is to introduce an autonomous definition of the connective "fuzzy exclusive or" (fuzzy Xor, for short), which is independent of other connectives. Also, two canonical definitions of the connective Xor are obtained from the composition of fuzzy connectives, based on the commutative and associative properties related to the notions of triangular norms, triangular conorms and fuzzy negations. We show that the main properties of the classical connective Xor are preserved by the connective fuzzy Xor, and therefore this new definition of the connective fuzzy Xor extends the related classical approach. The definitions of fuzzy Xor-implications and fuzzy E-implications, induced by the fuzzy Xor connective, are also studied, and their main properties are analyzed. The relationships of the fuzzy Xor-implications and the fuzzy E-implications with automorphisms are explored.
Multivariate modeling and type-2 fuzzy sets This paper explores the link between type-2 fuzzy sets and multivariate modeling. Elements of a space X are treated as observations fuzzily associated with values in a multivariate feature space. A category or class is likewise treated as a fuzzy allocation of feature values (possibly dependent on values in X). We observe that a type-2 fuzzy set on X generated by these two fuzzy allocations captures imprecision in the class definition and imprecision in the observations. In practice many type-2 fuzzy sets are in fact generated in this way and can therefore be interpreted as the output of a classification task. We then show that an arbitrary type-2 fuzzy set can be so constructed, by taking as a feature space a set of membership functions on X. This construction presents a new perspective on the Representation Theorem of Mendel and John. The multivariate modeling underpinning the type-2 fuzzy sets can also constrain realizable forms of membership functions. Because averaging operators such as centroid and subsethood on type-2 fuzzy sets involve a search for optima over membership functions, constraining this search can make computation easier and tighten the results. We demonstrate how the construction can be used to combine representations of concepts and how it therefore provides an additional tool, alongside standard operations such as intersection and subsethood, for concept fusion and computing with words.
A comparative study of ranking methods, similarity measures and uncertainty measures for interval type-2 fuzzy sets Ranking methods, similarity measures and uncertainty measures are very important concepts for interval type-2 fuzzy sets (IT2 FSs). So far, there is only one ranking method for such sets, whereas there are many similarity and uncertainty measures. A new ranking method and a new similarity measure for IT2 FSs are proposed in this paper. All these ranking methods, similarity measures and uncertainty measures are compared based on real survey data and then the most suitable ranking method, similarity measure and uncertainty measure that can be used in the computing with words paradigm are suggested. The results are useful in understanding the uncertainties associated with linguistic terms and hence how to use them effectively in survey design and linguistic information processing.
Similarity Measures Between Type-2 Fuzzy Sets In this paper, we give similarity measures between type-2 fuzzy sets and provide the axiom definition and properties of these measures. For practical use, we show how to compute the similarities between Gaussian type-2 fuzzy sets. Yang and Shih's [22] algorithm, a clustering method based on fuzzy relations that begins with a similarity matrix, is applied to these Gaussian type-2 fuzzy sets by beginning with these similarities. The clustering results are reasonable, consisting of a hierarchical tree according to different levels.
Structure segmentation and recognition in images guided by structural constraint propagation In some application domains, such as medical imaging, the objects that compose the scene are known as well as some of their properties and their spatial arrangement. We can take advantage of this knowledge to perform the segmentation and recognition of structures in medical images. We propose here to formalize this problem as a constraint network and we perform the segmentation and recognition by iterative domain reductions, the domains being sets of regions. For computational purposes we represent the domains by their upper and lower bounds and we iteratively reduce the domains by updating their bounds. We show some preliminary results on normal and pathological brain images.
Sublinear time, measurement-optimal, sparse recovery for all An approximate sparse recovery system in the l1 norm makes a small number of measurements of a noisy vector with at most k large entries and recovers those heavy hitters approximately. Formally, it consists of parameters N, k, ε, an m-by-N measurement matrix φ, and a decoding algorithm D. Given a vector x, where x_k denotes the optimal k-term approximation to x, the system approximates x by x̂ = D(φx), which must satisfy ||x̂ - x||_1 ≤ (1 + ε) ||x - x_k||_1. Among the goals in designing such systems are minimizing the number m of measurements and the runtime of the decoding algorithm D. We consider the "for all" model, in which a single matrix φ, possibly "constructed" non-explicitly using the probabilistic method, is used for all signals x. Many previous papers have provided algorithms for this problem. But all such algorithms that use the optimal number m = O(k log(N/k)) of measurements require superlinear time Ω(N log(N/k)). In this paper, we give the first algorithm for this problem that uses the optimum number of measurements (up to constant factors) and runs in sublinear time o(N) when k is sufficiently less than N. Specifically, for any positive integer l, our approach uses time O(l^5 ε^{-3} k (N/k)^{1/l}) and uses m = O(l^8 ε^{-3} k log(N/k)) measurements, with access to a data structure requiring space and preprocessing time O(l N k^{0.2} / ε).
Parallel Opportunistic Routing in Wireless Networks We study benefits of opportunistic routing in a large wireless ad hoc network by examining how the power, delay, and total throughput scale as the number of source–destination pairs increases up to the operating maximum. Our opportunistic routing is novel in a sense that it is massively parallel, i.e., it is performed by many nodes simultaneously to maximize the opportunistic gain while controlling the interuser interference. The scaling behavior of conventional multihop transmission that does not employ opportunistic routing is also examined for comparison. Our main results indicate that our opportunistic routing can exhibit a net improvement in overall power–delay tradeoff over the conventional routing by providing up to a logarithmic boost in the scaling law. Such a gain is possible since the receivers can tolerate more interference due to the increased received signal power provided by the multi user diversity gain, which means that having more simultaneous transmissions is possible.
Evaluating process performance based on the incapability index for measurements with uncertainty Process capability indices are widely used in industry to measure the ability of firms or their suppliers to meet quality specifications. The index C_PP, which is easy to use and analytically tractable, has been successfully developed and applied by competitive firms to dominate highly-profitable markets by improving quality and productivity. Hypothesis testing is very essential for practical decision-making. Generally, the underlying data are assumed to be precise numbers, but it is much more realistic to consider fuzzy values, which are imprecise numbers. In this case, the test statistic also yields an imprecise number, and decision rules based on the crisp-based approach are inappropriate. This study investigates the situation of uncertain or imprecise product quality measurements. A set of confidence intervals for the sample mean and variance is used to produce triangular fuzzy numbers for estimating the C_PP index. Based on the δ-cuts of the fuzzy estimators, a decision testing rule and procedure are developed to evaluate process performance based on critical values and fuzzy p-values. An efficient computer program is also designed for calculating fuzzy p-values. Finally, an example is examined to demonstrate the application of the proposed approach.
Scores: 1.022465, 0.024448, 0.019227, 0.014641, 0.008308, 0.001918, 0.000234, 0.000087, 0.000042, 0.000014, 0.000001, 0, 0, 0
Fuzzy decision making with immediate probabilities We developed a new decision-making model with probabilistic information and used the concept of the immediate probability to aggregate the information. This type of probability modifies the objective probability by introducing the attitudinal character of the decision maker. In doing so, we use the ordered weighted averaging (OWA) operator. When using this model, it is assumed that the information is given by exact numbers. However, this may not be the real situation found within the decision-making problem. Sometimes, the information is vague or imprecise and it is necessary to use another approach to assess the information, such as the use of fuzzy numbers. Then, the decision-making problem can be represented more completely because we now consider the best and worst possible scenarios, along with the possibility that some intermediate event (an internal value) will occur. We will use the fuzzy ordered weighted averaging (FOWA) operator to aggregate the information with the probabilities. As a result, we will get the Immediate Probability-FOWA (IP-FOWA) operator. We will study some of its main properties. We will apply the new approach to a decision-making problem about the selection of strategies.
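To make the aggregation step concrete, the sketch below shows plain OWA aggregation and one common way of forming immediate probabilities, namely reweighting the objective probabilities by the OWA weights of the positions their payoffs occupy and renormalizing. The payoffs, probabilities and weights are invented numbers, and this reweighting is an assumed illustrative formulation rather than the paper's exact operator.

```python
# OWA aggregation and an immediate-probability style reweighting (illustrative).
def owa(values, weights):
    """Ordered weighted average: weights attach to ranked positions, not arguments."""
    ordered = sorted(values, reverse=True)
    return sum(w * a for w, a in zip(weights, ordered))

def immediate_probability_value(values, probs, weights):
    """Reweight objective probabilities by the OWA weight of each payoff's rank,
    renormalize, then aggregate the payoffs with the resulting probabilities."""
    order = sorted(range(len(values)), key=lambda i: values[i], reverse=True)
    raw = [weights[rank] * probs[i] for rank, i in enumerate(order)]
    total = sum(raw)
    v = [x / total for x in raw]
    return sum(vi * values[i] for vi, i in zip(v, order))

payoffs = [60, 30, 50]      # payoffs of one strategy under three states of nature
probs   = [0.3, 0.3, 0.4]   # objective state probabilities
w       = [0.5, 0.3, 0.2]   # optimistic attitudinal OWA weights
print(owa(payoffs, w), immediate_probability_value(payoffs, probs, w))
```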
Comparing approximate reasoning and probabilistic reasoning using the Dempster-Shafer framework We investigate the problem of inferring information about the value of a variable V from its relationship with another variable U and information about U. We consider two approaches, one using the fuzzy set based theory of approximate reasoning and the other using probabilistic reasoning. Both of these approaches allow the inclusion of imprecise granular type information. The inferred values from each of these methods are then represented using a Dempster-Shafer belief structure. We then compare these values and show an underlying unity between these two approaches.
FIOWHM operator and its application to multiple attribute group decision making To study the problem of multiple attribute decision making in which the decision-making information values are triangular fuzzy numbers, a new group decision making method is proposed, and the calculation steps to solve it are given. As the key step, a new operator called the fuzzy induced ordered weighted harmonic mean (FIOWHM) operator is proposed, and a method based on the fuzzy weighted harmonic mean (FWHM) and FIOWHM operators for fuzzy MAGDM is presented. A priority method based on possibility degree for the fuzzy multiple attribute decision making problem is proposed. Finally, a numerical example is provided to illustrate the proposed method. The result shows the approach is simple, effective and easy to calculate.
A Method Based on OWA Operator and Distance Measures for Multiple Attribute Decision Making with 2-Tuple Linguistic Information In this paper we develop a new method for 2-tuple linguistic multiple attribute decision making, namely the 2-tuple linguistic generalized ordered weighted averaging distance (2LGOWAD) operator. This operator is an extension of the OWA operator that utilizes generalized means, distance measures and uncertain information represented as 2-tuple linguistic variables. By using 2LGOWAD, it is possible to obtain a wide range of 2-tuple linguistic aggregation distance operators such as the 2-tuple linguistic maximum distance, the 2-tuple linguistic minimum distance, the 2-tuple linguistic normalized Hamming distance (2LNHD), the 2-tuple linguistic weighted Hamming distance (2LWHD), the 2-tuple linguistic normalized Euclidean distance (2LNED), the 2-tuple linguistic weighted Euclidean distance (2LWED), the 2-tuple linguistic ordered weighted averaging distance (2LOWAD) operator and the 2-tuple linguistic Euclidean ordered weighted averaging distance (2LEOWAD) operator. We study some of its main properties, and we further generalize the 2LGOWAD operator using quasi-arithmetic means. The result is the Quasi-2LOWAD operator. Finally we present an application of the developed operators to decision-making regarding the selection of investment strategies.
Fuzzy induced generalized aggregation operators and its application in multi-person decision making We present a wide range of fuzzy induced generalized aggregation operators such as the fuzzy induced generalized ordered weighted averaging (FIGOWA) and the fuzzy induced quasi-arithmetic OWA (Quasi-FIOWA) operator. They are aggregation operators that use the main characteristics of the fuzzy OWA (FOWA) operator, the induced OWA (IOWA) operator and the generalized (or quasi-arithmetic) OWA operator. Therefore, they use uncertain information represented in the form of fuzzy numbers, generalized (or quasi-arithmetic) means and order inducing variables. The main advantage of these operators is that they include a wide range of mean operators such as the FOWA, the IOWA, the induced Quasi-OWA, the fuzzy IOWA, the fuzzy generalized mean and the fuzzy weighted quasi-arithmetic average (Quasi-FWA). We further generalize this approach by using Choquet integrals, obtaining the fuzzy induced quasi-arithmetic Choquet integral aggregation (Quasi-FICIA) operator. We also develop an application of the new approach in a strategic multi-person decision making problem.
Decision making with extended fuzzy linguistic computing, with applications to new product development and survey analysis Fuzzy set theory, with its ability to capture and process uncertainties and vagueness inherent in subjective human reasoning, has been under continuous development since its introduction in the 1960s. Recently, the 2-tuple fuzzy linguistic computing has been proposed as a methodology to aggregate fuzzy opinions ( Herrera & Martinez, 2000a, 2000b ), for example, in the evaluation of new product development performance ( Wang, 2009 ) and in customer satisfactory level survey analysis ( Lin & Lee, 2009 ). The 2-tuple fuzzy linguistic approach has the advantage of avoiding information loss that can potentially occur when combining opinions of experts. Given the fuzzy ratings of the evaluators, the computation procedure used in both Wang (2009) and Lin and Lee (2009) returned a single crisp value as an output, representing the average judgment of those evaluators. In this article, we take an alternative view that the result of aggregating fuzzy ratings should be fuzzy itself, and therefore we further develop the 2-tuple fuzzy linguistic methodology so that its output is a fuzzy number describing the aggregation of opinions. We demonstrate the utility of the extended fuzzy linguistic computing methodology by applying it to two data sets: (i) the evaluation of a new product idea in a Taiwanese electronics manufacturing firm and (ii) the evaluation of the investment benefit of a proposed facility site.
A sequential selection process in group decision making with a linguistic assessment approach In this paper a Sequential Selection Process in Group Decision Making under linguistic assessments is presented, where a set of linguistic preference relations represents individuals' preferences. A collective linguistic preference is obtained by means of a defined linguistic ordered weighted averaging operator whose weights are chosen according to the concept of fuzzy majority, specified by a fuzzy linguistic quantifier. Then we define the concepts of linguistic nondominance, linguistic...
Fuzzy multiple criteria forestry decision making based on an integrated VIKOR and AHP approach Forestation and forest preservation in urban watersheds are issues of vital importance as forested watersheds not only preserve the water supplies of a city but also contribute to soil erosion prevention. The use of fuzzy multiple criteria decision aid (MCDA) in urban forestation has the advantage of rendering subjective and implicit decision making more objective and transparent. An additional merit of fuzzy MCDA is its ability to accommodate quantitative and qualitative data. In this paper an integrated VIKOR-AHP methodology is proposed to make a selection among the alternative forestation areas in Istanbul. In the proposed methodology, the weights of the selection criteria are determined by fuzzy pairwise comparison matrices of AHP. It is found that Omerli watershed is the most appropriate forestation district in Istanbul.
Web-based Multi-Criteria Group Decision Support System with Linguistic Term Processing Function Organizational decisions are often made in groups where group members may be distributed geographically in different locations. Furthermore, a decision-making process, in practice, frequently involves various uncertain factors, including linguistic expressions of decision makers' preferences and opinions. This study first proposes a rational-political group decision-making model which identifies three uncertain factors involved in a group decision-making process: decision makers' roles in a group reaching a satisfactory solution, preferences for alternatives and judgments for assessment criteria. Based on the model, a linguistic term oriented multi-criteria group decision-making method is developed. The method uses general fuzzy numbers to deal with the three uncertain factors described by linguistic terms and aggregates these factors into a group satisfactory decision that is in a most acceptable degree for the group. Moreover, this study implements the method by developing a web-based group decision support system. This system allows decision makers to participate in group decision-making through the web, and manages the group decision-making process as a whole, from criteria generation, alternative evaluation and opinion interaction to decision aggregation. Finally, an application of the system is presented to illustrate the web-based group decision support system.
Perceptual reasoning for perceptual computing: a similarity-based approach Perceptual reasoning (PR) is an approximate reasoning method that can be used as a computing-with-words (CWW) engine in perceptual computing. There can be different approaches to implement PR, e.g., firing-interval-based PR (FI-PR), which has been proposed in J. M. Mendel and D. Wu, IEEE Trans. Fuzzy Syst., vol. 16, no. 6, pp. 1550-1564, Dec. 2008 and similarity-based PR (SPR), which is proposed in this paper. Both approaches satisfy the requirement on a CWW engine that the result of combining fired rules should lead to a footprint of uncertainty (FOU) that resembles the three kinds of FOUs in a CWW codebook. A comparative study shows that S-PR leads to output FOUs that resemble word FOUs, which are obtained from subject data, much more closely than FI-PR; hence, S-PR is a better choice for a CWW engine than FI-PR.
Systematic image processing for diagnosing brain tumors: A Type-II fuzzy expert system approach This paper presents a systematic Type-II fuzzy expert system for diagnosing human brain tumors (astrocytoma tumors) using T1-weighted Magnetic Resonance Images with contrast. The proposed Type-II fuzzy image processing method has four distinct modules: Pre-processing, Segmentation, Feature Extraction, and Approximate Reasoning. We develop a fuzzy rule base by aggregating the existing filtering methods for the Pre-processing step. For the Segmentation step, we extend the Possibilistic C-Mean (PCM) method by using Type-II fuzzy concepts, the Mahalanobis distance, and the Kwon validity index. Feature Extraction is done by a thresholding method. Finally, we develop a Type-II Approximate Reasoning method to recognize the tumor grade in brain MRI. The proposed Type-II expert system has been tested and validated to show its accuracy in the real world. The results show that the proposed system is superior in recognizing the brain tumor and its grade to Type-I fuzzy expert systems.
Gossip Algorithms for Distributed Signal Processing Gossip algorithms are attractive for in-network processing in sensor networks because they do not require any specialized routing, there is no bottleneck or single point of failure, and they are robust to unreliable wireless network conditions. Recently, there has been a surge of activity in the computer science, control, signal processing, and information theory communities, developing faster and...
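The simplest instance of such an algorithm is randomized pairwise gossip for distributed averaging, sketched below on a small ring of sensors; the topology, readings and step count are illustrative only.

```python
# Randomized pairwise gossip averaging: at each step a random edge is activated
# and its two endpoints replace their values by the pairwise average.
import random

def gossip_average(values, edges, steps=5000, seed=0):
    rng = random.Random(seed)
    x = list(values)
    for _ in range(steps):
        i, j = rng.choice(edges)
        avg = 0.5 * (x[i] + x[j])
        x[i] = x[j] = avg
    return x

# ring of 6 sensors with noisy local measurements
edges = [(i, (i + 1) % 6) for i in range(6)]
readings = [3.0, 7.0, 4.0, 6.0, 5.0, 5.0]
print("true average :", sum(readings) / len(readings))
print("gossip result:", gossip_average(readings, edges))
```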
Sublinear compressive sensing reconstruction via belief propagation decoding We propose a new compressive sensing scheme, based on codes of graphs, that allows for joint design of sensing matrices and low complexity reconstruction algorithms. The compressive sensing matrices can be shown to offer asymptotically optimal performance when used in combination with OMP methods. For more elaborate greedy reconstruction schemes, we propose a new family of list decoding and multiple-basis belief propagation algorithms. Our simulation results indicate that the proposed CS scheme offers good complexity-performance tradeoffs for several classes of sparse signals.
Thermal switching error versus delay tradeoffs in clocked QCA circuits The quantum-dot cellular automata (QCA) model offers a novel nano-domain computing architecture by mapping the intended logic onto the lowest energy configuration of a collection of QCA cells, each with two possible ground states. A four-phased clocking scheme has been suggested to keep the computations at the ground state throughout the circuit. This clocking scheme, however, induces latency or delay in the transmission of information from input to output. In this paper, we study the interplay of computing error behavior with delay or latency of computation induced by the clocking scheme. Computing errors in QCA circuits can arise due to the failure of the clocking scheme to switch portions of the circuit to the ground state with change in input. Some of these non-ground states will result in output errors and some will not. The larger the size of each clocking zone, i.e., the greater the number of cells in each zone, the more the probability of computing errors. However, larger clocking zones imply faster propagation of information from input to output, i.e., reduced delay. Current QCA simulators compute just the ground state configuration of a QCA arrangement. In this paper, we offer an efficient method to compute the N-lowest energy modes of a clocked QCA circuit. We model the QCA cell arrangement in each zone using a graph-based probabilistic model, which is then transformed into a Markov tree structure defined over subsets of QCA cells. This tree structure allows us to compute the N-lowest energy configurations in an efficient manner by local message passing. We analyze the complexity of the model and show it to be polynomial in terms of the number of cells, assuming a finite neighborhood of influence for each QCA cell, which is usually the case. The overall low-energy spectrum of multiple clocking zones is constructed by concatenating the low-energy spectra of the individual clocking zones. We demonstrate how the model can be used to study the tradeoff between switching errors and clocking zones.
Scores: 1.021808, 0.024, 0.022362, 0.020677, 0.007839, 0.003506, 0.001297, 0.000357, 0.000171, 0.000078, 0.000001, 0, 0, 0
Uncertainty quantification of electronic and photonic ICs with non-Gaussian correlated process variations Since the invention of generalized polynomial chaos in 2002, uncertainty quantification has impacted many engineering fields, including variation-aware design automation of integrated circuits and integrated photonics. Due to the fast convergence rate, the generalized polynomial chaos expansion has achieved orders-of-magnitude speedup over Monte Carlo in many applications. However, almost all existing generalized polynomial chaos methods have a strong assumption: the uncertain parameters are mutually independent or Gaussian correlated. This assumption rarely holds in many realistic applications, and it has been a long-standing challenge for both theorists and practitioners. This paper proposes a rigorous and efficient solution to address the challenge of non-Gaussian correlation. We first extend generalized polynomial chaos, and propose a class of smooth basis functions to efficiently handle non-Gaussian correlations. Then, we consider high-dimensional parameters, and develop a scalable tensor method to compute the proposed basis functions. Finally, we develop a sparse solver with adaptive sample selections to solve high-dimensional uncertainty quantification problems. We validate our theory and algorithm by electronic and photonic ICs with 19 to 57 non-Gaussian correlated variation parameters. The results show that our approach outperforms Monte Carlo by 2500× to 3000× in terms of efficiency. Moreover, our method can accurately predict the output density functions with multiple peaks caused by non-Gaussian correlations, which is hard to handle by existing methods. Based on the results in this paper, many novel uncertainty quantification algorithms can be developed and can be further applied to a broad range of engineering domains.
Multi-Wafer Virtual Probe: Minimum-cost variation characterization by exploring wafer-to-wafer correlation In this paper, we propose a new technique, referred to as Multi-Wafer Virtual Probe (MVP) to efficiently model wafer-level spatial variations for nanoscale integrated circuits. Towards this goal, a novel Bayesian inference is derived to extract a shared model template to explore the wafer-to-wafer correlation information within the same lot. In addition, a robust regression algorithm is proposed to automatically detect and remove outliers (i.e., abnormal measurement data with large error) so that they do not bias the modeling results. The proposed MVP method is extensively tested for silicon measurement data collected from 200 wafers at an advanced technology node. Our experimental results demonstrate that MVP offers superior accuracy over other traditional approaches such as VP and EM, if a limited number of measurement data are available.
Bayesian Model Fusion: A statistical framework for efficient pre-silicon validation and post-silicon tuning of complex analog and mixed-signal circuits In this paper, we describe a novel statistical framework, referred to as Bayesian Model Fusion (BMF), that allows us to minimize the simulation and/or measurement cost for both pre-silicon validation and post-silicon tuning of analog and mixed-signal (AMS) circuits with consideration of large-scale process variations. The BMF technique is motivated by the fact that today's AMS design cycle typically spans multiple stages (e.g., schematic design, layout design, first tape-out, second tape-out, etc.). Hence, we can reuse the simulation and/or measurement data collected at an early stage to facilitate efficient validation and tuning of AMS circuits with a minimal amount of data at the late stage. The efficacy of BMF is demonstrated by using several industrial circuit examples.
Tensor Computation: A New Framework for High-Dimensional Problems in EDA. Many critical electronic design automation (EDA) problems suffer from the curse of dimensionality, i.e., the very fast-scaling computational burden produced by large number of parameters and/or unknown variables. This phenomenon may be caused by multiple spatial or temporal factors (e.g., 3-D field solvers discretizations and multirate circuit simulation), nonlinearity of devices and circuits, large number of design or optimization parameters (e.g., full-chip routing/placement and circuit sizing), or extensive process variations (e.g., variability /reliability analysis and design for manufacturability). The computational challenges generated by such high-dimensional problems are generally hard to handle efficiently with traditional EDA core algorithms that are based on matrix and vector computation. This paper presents “tensor computation” as an alternative general framework for the development of efficient EDA algorithms and tools. A tensor is a high-dimensional generalization of a matrix and a vector, and is a natural choice for both storing and solving efficiently high-dimensional EDA problems. This paper gives a basic tutorial on tensors, demonstrates some recent examples of EDA applications (e.g., nonlinear circuit modeling and high-dimensional uncertainty quantification), and suggests further open EDA problems where the use of tensor computation could be of advantage.
Stochastic Testing Method for Transistor-Level Uncertainty Quantification Based on Generalized Polynomial Chaos Uncertainties have become a major concern in integrated circuit design. In order to avoid the huge number of repeated simulations in conventional Monte Carlo flows, this paper presents an intrusive spectral simulator for statistical circuit analysis. Our simulator employs the recently developed generalized polynomial chaos expansion to perform uncertainty quantification of nonlinear transistor circuits with both Gaussian and non-Gaussian random parameters. We modify the nonintrusive stochastic collocation (SC) method and develop an intrusive variant called stochastic testing (ST) method. Compared with the popular intrusive stochastic Galerkin (SG) method, the coupled deterministic equations resulting from our proposed ST method can be solved in a decoupled manner at each time point. At the same time, ST requires fewer samples and allows more flexible time step size controls than directly using a nonintrusive SC solver. These two properties make ST more efficient than SG and than existing SC methods, and more suitable for time-domain circuit simulation. Simulation results of several digital, analog and RF circuits are reported. Since our algorithm is based on generic mathematical models, the proposed ST algorithm can be applied to many other engineering problems.
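For intuition about the generalized polynomial chaos machinery this method builds on, the sketch below fits a one-parameter Hermite-chaos surrogate by non-intrusive projection with Gauss-Hermite quadrature and reads off the mean and standard deviation. This is not the stochastic testing solver itself, and the "circuit metric" is a made-up stand-in function.

```python
# Generalized polynomial chaos surrogate for one standard Gaussian parameter,
# built by projection onto probabilists' Hermite polynomials.
import numpy as np
from numpy.polynomial import hermite_e as He
from math import factorial

def circuit_metric(xi):
    """Stand-in for a circuit output (e.g. a delay) versus a normalized parameter."""
    return 1.0 + 0.3 * xi + 0.05 * xi**2 + 0.01 * np.sin(xi)

order, nquad = 4, 20
x, w = He.hermegauss(nquad)          # nodes/weights for weight exp(-x^2 / 2)
w = w / np.sqrt(2.0 * np.pi)         # normalize so weights integrate the N(0,1) density

coeffs = []
for k in range(order + 1):
    ek = np.zeros(k + 1); ek[k] = 1.0
    Hek = He.hermeval(x, ek)         # probabilists' Hermite polynomial He_k at the nodes
    coeffs.append(np.sum(w * circuit_metric(x) * Hek) / factorial(k))  # E[f He_k] / k!

mean = coeffs[0]
var = sum(factorial(k) * coeffs[k] ** 2 for k in range(1, order + 1))
print("gPC mean:", mean, "gPC std:", np.sqrt(var))
```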
Semantics of Context-Free Languages "Meaning" may be assigned to a string in a context-free language by defining "attributes" of the symbols in a derivation tree for that string. The attributes can be defined by functions associated with each production in the grammar. This paper examines the implications of this process when some of the attributes are "synthesized", i.e., defined solely in terms of attributes of the descendants of the corresponding nonterminal symbol, while other attributes are "inherited", i.e., defined in terms of attributes of the ancestors of the nonterminal symbol. An algorithm is given which detects when such semantic rules could possibly lead to circular definition of some attributes. An example is given of a simple programming language defined with both inherited and synthesized attributes, and the method of definition is compared to other techniques for formal specification of semantics which have appeared in the literature. A simple technique for specifying the "meaning" of languages defined by context-free grammars is introduced in Section 1 of this paper, and its basic mathematical properties are investigated in Sections 2 and 3. An example which indicates how the technique can be applied to the formal definition of programming languages is described in Section 4, and finally, Section 5 contains a somewhat biased comparison of the present method to other known techniques for semantic definition. The discussion in this paper is oriented primarily towards programming languages, but the same methods appear to be relevant also in the study of natural languages. 1. Introduction. Let us suppose that we want to give a precise definition of binary notation for numbers. This can be done in many ways, and in this section we want to consider a manner of definition which can be generalized so that the meaning of other notations can be expressed in the same way. One such way to define binary notation is to base a definition on
Sets with type-2 operations The algebra of truth values of type-2 fuzzy sets consists of all mappings of the unit interval to itself, with type-2 operations that are convolutions of ordinary max and min operations. This paper is concerned with a special subalgebra of this truth value algebra, namely the set of nonzero functions with values in the two-element set {0,1}. This algebra can be identified with the set of all non-empty subsets of the unit interval, but the operations are not the usual union and intersection. We give simplified descriptions of the operations and derive the basic algebraic properties of this algebra, including the identification of its automorphism group. We also discuss some subalgebras and homomorphisms between them and look briefly at t-norms on this algebra of sets.
Fuzzy logic systems for engineering: a tutorial A fuzzy logic system (FLS) is unique in that it is able to simultaneously handle numerical data and linguistic knowledge. It is a nonlinear mapping of an input data (feature) vector into a scalar output, i.e., it maps numbers into numbers. Fuzzy set theory and fuzzy logic establish the specifics of the nonlinear mapping. This tutorial paper provides a guided tour through those aspects of fuzzy sets and fuzzy logic that are necessary to synthesize an FLS. It does this by starting with crisp set theory and dual logic and demonstrating how both can be extended to their fuzzy counterparts. Because engineering systems are, for the most part, causal, we impose causality as a constraint on the development of the FLS. After synthesizing a FLS, we demonstrate that it can be expressed mathematically as a linear combination of fuzzy basis functions, and is a nonlinear universal function approximator, a property that it shares with feedforward neural networks. The fuzzy basis function expansion is very powerful because its basis functions can be derived from either numerical data or linguistic knowledge, both of which can be cast into the forms of IF-THEN rules
Convergence Rates of Best N-term Galerkin Approximations for a Class of Elliptic sPDEs Deterministic Galerkin approximations of a class of second order elliptic PDEs with random coefficients on a bounded domain D ⊂ ℝ^d are introduced and their convergence rates are estimated. The approximations are based on expansions of the random diffusion coefficients in L^2(D)-orthogonal bases, and on viewing the coefficients of these expansions as random parameters y = y(ω) = (y_i(ω)). This yields an equivalent parametric deterministic PDE whose solution u(x,y) is a function of both the space variable x ∈ D and the, in general, countably many parameters y. We establish new regularity theorems describing the smoothness properties of the solution u as a map from y ∈ U = (−1,1)^∞ to V = H^1_0(D). These results lead to analytic estimates on the V norms of the coefficients (which are functions of x) in a so-called "generalized polynomial chaos" (gpc) expansion of u. Convergence estimates of approximations of u by best N-term truncated V-valued polynomials in the variable y ∈ U are established. These estimates are of the form N^{−r}, where the rate of convergence r depends only on the decay of the random input expansion. It is shown that r exceeds the benchmark rate 1/2 afforded by Monte Carlo simulations with N "samples" (i.e., deterministic solves) under mild smoothness conditions on the random diffusion coefficients. A class of fully discrete approximations is obtained by Galerkin approximation from a hierarchic family {V_l}_{l≥0} ⊂ V of finite element spaces in D of the coefficients in the N-term truncated gpc expansions of u(x,y). In contrast to previous works, the level l of spatial resolution is adapted to the gpc coefficient. New regularity theorems describing the smoothness properties of the solution u as a map from y ∈ U = (−1,1)^∞ to a smoothness space W ⊂ V are established, leading to analytic estimates on the W norms of the gpc coefficients and on their space discretization error. The space W coincides with H^2(D) ∩ H^1_0(D) in the case where D is a smooth or convex domain. Our analysis shows that in realistic settings a convergence rate N_dof^{−s} in terms of the total number of degrees of freedom N_dof can be obtained. Here the rate s is determined by both the best N-term approximation rate r and the approximation order of the space discretization in D.
Proceedings of the 47th Design Automation Conference, DAC 2010, Anaheim, California, USA, July 13-18, 2010
Restricted Eigenvalue Properties for Correlated Gaussian Designs Methods based on l1-relaxation, such as basis pursuit and the Lasso, are very popular for sparse regression in high dimensions. The conditions for success of these methods are now well-understood: (1) exact recovery in the noiseless setting is possible if and only if the design matrix X satisfies the restricted nullspace property, and (2) the squared l2-error of a Lasso estimate decays at the minimax optimal rate k log p / n, where k is the sparsity of the p-dimensional regression problem with additive Gaussian noise, whenever the design satisfies a restricted eigenvalue condition. The key issue is thus to determine when the design matrix X satisfies these desirable properties. Thus far, there have been numerous results showing that the restricted isometry property, which implies both the restricted nullspace and eigenvalue conditions, is satisfied when all entries of X are independent and identically distributed (i.i.d.), or the rows are unitary. This paper proves directly that the restricted nullspace and eigenvalue conditions hold with high probability for quite general classes of Gaussian matrices for which the predictors may be highly dependent, and hence restricted isometry conditions can be violated with high probability. In this way, our results extend the attractive theoretical guarantees on l1-relaxations to a much broader class of problems than the case of completely independent or unitary designs.
A Simple Compressive Sensing Algorithm for Parallel Many-Core Architectures In this paper we consider the l1-compressive sensing problem. We propose an algorithm specifically designed to take advantage of shared-memory, vectorized, parallel and many-core microprocessors such as the Cell processor, new generation Graphics Processing Units (GPUs) and standard vectorized multi-core processors (e.g. quad-core CPUs). Besides, its implementation is easy. We also give evidence of the efficiency of our approach and compare the algorithm on the three platforms, thus exhibiting pros and cons for each of them.
A fuzzy CBR technique for generating product ideas This paper presents a fuzzy CBR (case-based reasoning) technique for generating new product ideas from a product database for enhancing the functions of a given product (called the baseline product). In the database, a product is modeled by a 100-attribute vector, 87 of which are used to model the use-scenario and 13 are used to describe the manufacturing/recycling features. Based on the use-scenario attributes and their relative weights - determined by a fuzzy AHP technique, a fuzzy CBR retrieving mechanism is developed to retrieve product-ideas that tend to enhance the functions of the baseline product. Based on the manufacturing/recycling features, a fuzzy CBR mechanism is developed to screen the retrieved product ideas in order to obtain a higher ratio of valuable product ideas. Experiments indicate that the retrieving-and-filtering mechanism outperforms the prior retrieving-only mechanism in terms of generating a higher ratio of valuable product ideas.
The laws of large numbers for fuzzy random variables A new approach to the weak and strong laws of large numbers for fuzzy random variables is discussed in this paper by proposing the notions of convergence in probability and convergence with probability one for fuzzy random variables. We first consider the limit properties of fuzzy numbers by invoking the Hausdorff metric, and then we extend them to convergence in probability and convergence with probability one for fuzzy random variables. We provide the notions of weak and strong convergence in probability and weak and strong convergence with probability one for fuzzy random variables. Finally, we establish the weak and strong laws of large numbers for fuzzy random variables in both the weak and strong sense.
Scores: 1.1, 0.05, 0.033333, 0.02, 0.008696, 0, 0, 0, 0, 0, 0, 0, 0, 0
Exploring Strategies for Training Deep Neural Networks Deep multi-layer neural networks have many levels of non-linearities allowing them to compactly represent highly non-linear and highly-varying functions. However, until recently it was not clear how to train such deep networks, since gradient-based optimization starting from random initialization often appears to get stuck in poor solutions. Hinton et al. recently proposed a greedy layer-wise unsupervised learning procedure relying on the training algorithm of restricted Boltzmann machines (RBM) to initialize the parameters of a deep belief network (DBN), a generative model with many layers of hidden causal variables. This was followed by the proposal of another greedy layer-wise procedure, relying on the usage of autoassociator networks. In the context of the above optimization problem, we study these algorithms empirically to better understand their success. Our experiments confirm the hypothesis that the greedy layer-wise unsupervised training strategy helps the optimization by initializing weights in a region near a good local minimum, but also implicitly acts as a sort of regularization that brings better generalization and encourages internal distributed representations that are high-level abstractions of the input. We also present a series of experiments aimed at evaluating the link between the performance of deep neural networks and practical aspects of their topology, for example, demonstrating cases where the addition of more depth helps. Finally, we empirically explore simple variants of these training algorithms, such as the use of different RBM input unit distributions, a simple way of combining gradient estimators to improve performance, as well as on-line versions of those algorithms.
Towards situated speech understanding: visual context priming of language models Fuse is a situated spoken language understanding system that uses visual context to steer the interpretation of speech. Given a visual scene and a spoken description, the system finds the object in the scene that best fits the meaning of the description. To solve this task, Fuse performs speech recognition and visually-grounded language understanding. Rather than treat these two problems separately, knowledge of the visual semantics of language and the specific contents of the visual scene are fused during speech processing. As a result, the system anticipates various ways a person might describe any object in the scene, and uses these predictions to bias the speech recognizer towards likely sequences of words. A dynamic visual attention mechanism is used to focus processing on likely objects within the scene as spoken utterances are processed. Visual attention and language prediction reinforce one another and converge on interpretations of incoming speech signals which are most consistent with visual context. In evaluations, the introduction of visual context into the speech recognition process results in significantly improved speech recognition and understanding accuracy. The underlying principles of this model may be applied to a wide range of speech understanding problems including mobile and assistive technologies in which contextual information can be sensed and semantically interpreted to bias processing.
Embodied Language Understanding with a Multiple Timescale Recurrent Neural Network How the human brain understands natural language and what we can learn for intelligent systems is open research. Recently, researchers claimed that language is embodied in most — if not all — sensory and sensorimotor modalities and that the brain's architecture favours the emergence of language. In this paper we investigate the characteristics of such an architecture and propose a model based on the Multiple Timescale Recurrent Neural Network, extended by embodied visual perception. We show that such an architecture can learn the meaning of utterances with respect to visual perception and that it can produce verbal utterances that correctly describe previously unknown scenes.
The hierarchical and functional connectivity of higher-order cognitive mechanisms: neurorobotic model to investigate the stability and flexibility of working memory. Higher-order cognitive mechanisms (HOCM), such as planning, cognitive branching, switching, etc., are known to be the outcomes of a unique neural organizations and dynamics between various regions of the frontal lobe. Although some recent anatomical and neuroimaging studies have shed light on the architecture underlying the formation of such mechanisms, the neural dynamics and the pathways in and between the frontal lobe to form and/or to tune the stability level of its working memory remain controversial. A model to clarify this aspect is therefore required. In this study, we propose a simple neurocomputational model that suggests the basic concept of how HOCM, including the cognitive branching and switching in particular, may mechanistically emerge from time-based neural interactions. The proposed model is constructed such that its functional and structural hierarchy mimics, to a certain degree, the biological hierarchy that is believed to exist between local regions in the frontal lobe. Thus, the hierarchy is attained not only by the force of the layout architecture of the neural connections but also through distinct types of neurons, each with different time properties. To validate the model, cognitive branching and switching tasks were simulated in a physical humanoid robot driven by the model. Results reveal that separation between the lower and the higher-level neurons in such a model is an essential factor to form an appropriate working memory to handle cognitive branching and switching. The analyses of the obtained result also illustrates that the breadth of this separation is important to determine the characteristics of the resulting memory, either static memory or dynamic memory. This work can be considered as a joint research between synthetic and empirical studies, which can open an alternative research area for better understanding of brain mechanisms.
Compressed Sensing with Coherent and Redundant Dictionaries This article presents novel results concerning the recovery of signals from undersampled data in the common situation where such signals are not sparse in an orthonormal basis or incoherent dictionary, but in a truly redundant dictionary. This work thus bridges a gap in the literature and shows not only that compressed sensing is viable in this context, but also that accurate recovery is possible via an ℓ1-analysis optimization problem. We introduce a condition on the measurement/sensing matrix, which is a natural generalization of the now well-known restricted isometry property, and which guarantees accurate recovery of signals that are nearly sparse in (possibly) highly overcomplete and coherent dictionaries. This condition imposes no incoherence restriction on the dictionary and our results may be the first of this kind. We discuss practical examples and the implications of our results on those applications, and complement our study by demonstrating the potential of ℓ1-analysis for such problems.
Compressed Sensing. Suppose x is an unknown vector in R^m (a digital image or signal); we plan to measure n general linear functionals of x and then reconstruct. If x is known to be compressible by transform coding with a known transform, and we reconstruct via the nonlinear procedure defined here, the number of measurements n can be dramatically smaller than the size m. Thus, certain natural classes of images with m pixels need only n = O(m^{1/4} log^{5/2}(m)) nonadaptive nonpixel samples for faithful recovery, as opposed to the usual m pixel samples. More specifically, suppose x has a sparse representation in some orthonormal basis (e.g., wavelet, Fourier) or tight frame (e.g., curvelet, Gabor), so the coefficients belong to an ℓ_p ball for 0 < p ≤ 1. The N most important coefficients in that expansion allow reconstruction with ℓ_2 error O(N^{1/2 - 1/p}). It is possible to design n = O(N log(m)) nonadaptive measurements allowing reconstruction with accuracy comparable to that attainable with direct knowledge of the N most important coefficients. Moreover, a good approximation to those N important coefficients is extracted from the n measurements by solving a linear program (Basis Pursuit in signal processing). The nonadaptive measurements have the character of "random" linear combinations of basis/frame elements. Our results use the notions of optimal recovery, of n-widths, and information-based complexity. We estimate the Gel'fand n-widths of ℓ_p balls in high-dimensional Euclidean space in the case 0 < p ≤ 1, and give a criterion identifying near-optimal subspaces for Gel'fand n-widths. We show that "most" subspaces are near-optimal, and show that convex optimization (Basis Pursuit) is a near-optimal way to extract information derived from these near-optimal subspaces.
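The abstract above recovers the important coefficients by solving a linear program (Basis Pursuit). As a rough illustration of recovery from n << m random measurements, the sketch below uses orthogonal matching pursuit, a greedy alternative to that ℓ1 program; the dimensions, the Gaussian measurement matrix and all function names are our own choices, not the paper's.

```python
import numpy as np

def omp(A, y, k, tol=1e-10):
    """Greedy recovery of a k-sparse x from y = A x (A is n-by-m, n << m).
    A stand-in for the l1/Basis Pursuit program discussed in the abstract."""
    n, m = A.shape
    residual = y.copy()
    support = []
    x = np.zeros(m)
    for _ in range(k):
        # pick the column most correlated with the current residual
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # least-squares fit on the chosen support
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x = np.zeros(m)
        x[support] = coef
        residual = y - A @ x
        if np.linalg.norm(residual) < tol:
            break
    return x

rng = np.random.default_rng(0)
m, n, k = 256, 80, 8                                # signal length, measurements, sparsity
x_true = np.zeros(m)
x_true[rng.choice(m, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((n, m)) / np.sqrt(n)        # "random" measurement matrix
y = A @ x_true
x_hat = omp(A, y, k)
print("recovery error:", np.linalg.norm(x_hat - x_true))
```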
An optimal algorithm for approximate nearest neighbor searching in fixed dimensions Consider a set S of n data points in real d-dimensional space, R^d, where distances are measured using any Minkowski metric. In nearest neighbor searching, we preprocess S into a data structure, so that given any query point q ∈ R^d, the closest point of S to q can be reported quickly. Given any positive real ε, data point p is a (1+ε)-approximate nearest neighbor of q if its distance from q is within a factor of (1+ε) of the distance to the true nearest neighbor. We show that it is possible to preprocess a set of n points in R^d in O(dn log n) time and O(dn) space, so that given a query point q ∈ R^d and ε > 0, a (1+ε)-approximate nearest neighbor of q can be computed in O(c_{d,ε} log n) time, where c_{d,ε} ≤ d⌈1 + 6d/ε⌉^d is a factor depending only on dimension and ε. In general, we show that given an integer k ≥ 1, (1+ε)-approximations to the k nearest neighbors of q can be computed in additional O(kd log n) time.
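The guarantee above concerns (1+ε)-approximate nearest neighbors; the paper's actual data structure is a tree built in O(dn log n) time. The sketch below only illustrates the definition: an exact brute-force search plus a checker that tests whether a candidate satisfies the (1+ε) criterion under a Minkowski metric. Names and data are ours, not the paper's.

```python
import numpy as np

def nearest(S, q, p=2.0):
    """Exact nearest neighbor of q in S under the Minkowski p-metric."""
    d = np.linalg.norm(S - q, ord=p, axis=1)
    i = int(np.argmin(d))
    return i, d[i]

def is_eps_approximate(S, q, candidate, eps, p=2.0):
    """True if `candidate` (an index into S) is a (1+eps)-approximate NN of q."""
    _, d_star = nearest(S, q, p)
    d_cand = np.linalg.norm(S[candidate] - q, ord=p)
    return d_cand <= (1.0 + eps) * d_star

rng = np.random.default_rng(1)
S = rng.random((1000, 4))
q = rng.random(4)
idx, _ = nearest(S, q)
print(is_eps_approximate(S, q, idx, eps=0.1))   # the exact NN always qualifies
```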
Asymptotic Sampling Distribution for Polynomial Chaos Representation from Data: A Maximum Entropy and Fisher Information Approach A procedure is presented for characterizing the asymptotic sampling distribution of estimators of the polynomial chaos (PC) coefficients of a second-order nonstationary and non-Gaussian random process by using a collection of observations. The random process represents a physical quantity of interest, and the observations made over a finite denumerable subset of the indexing set of the random process are considered to form a set of realizations of a random vector $\mathcal{Y}$ representing a finite-dimensional projection of the random process. The Karhunen-Loève decomposition and a scaling transformation are employed to produce a reduced-order model $\mathcal{Z}$ of $\mathcal{Y}$. The PC expansion of $\mathcal{Z}$ is next determined by having recourse to the maximum-entropy principle, the Metropolis-Hastings Markov chain Monte Carlo algorithm, and the Rosenblatt transformation. The resulting PC expansion has random coefficients, where the random characteristics of the PC coefficients can be attributed to the limited data available from the experiment. The estimators of the PC coefficients of $\mathcal{Y}$ obtained from that of $\mathcal{Z}$ are found to be maximum likelihood estimators as well as consistent and asymptotically efficient. Computation of the covariance matrix of the associated asymptotic normal distribution of estimators of the PC coefficients of $\mathcal{Y}$ requires knowledge of the Fisher information matrix (FIM). The FIM is evaluated here by using a numerical integration scheme as well as a sampling technique. The resulting confidence interval on the PC coefficient estimators essentially reflects the effect of incomplete information (due to data limitation) on the characterization of the stochastic process. This asymptotic distribution is significant as its characteristics can be propagated through predictive models for which the stochastic process in question describes uncertainty on some input parameters.
On the Smolyak Cubature Error for Analytic Functions this paper, the author has been informed that Gerstner and Griebel [4] rediscovered this method. For algorithmic details, we refer to their paper. The resulting Smolyak cubature formulae are denoted by Q
SOS: The MOS is not enough! When it comes to analysis and interpretation of the results of subjective QoE studies, one often witnesses a lack of attention to the diversity in subjective user ratings. In extreme cases, solely Mean Opinion Scores (MOS) are reported, causing the loss of important information on the user rating diversity. In this paper, we emphasize the importance of considering the Standard deviation of Opinion Scores (SOS) and analyze important characteristics of this measure. As a result, we formulate the SOS hypothesis which postulates a square relationship between the MOS and the SOS. We demonstrate the validity and applicability of the SOS hypothesis for a wide range of studies. The main benefit of the SOS hypothesis is that it allows for a compact, yet still comprehensive statistical summary of subjective user tests. Furthermore, it supports checking the reliability of test result data sets as well as their comparability across different QoE studies.
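The SOS hypothesis referenced above relates the standard deviation of opinion scores to the MOS through a single parameter a; on a 5-point scale it takes the form SOS(x)^2 = a(-x^2 + 6x - 5). A minimal sketch of fitting a from per-condition (MOS, SOS) pairs follows; the data points are invented purely for illustration.

```python
import numpy as np

def fit_sos_parameter(mos, sos):
    """Least-squares fit of the single SOS-hypothesis parameter a, assuming a
    5-point rating scale: SOS(x)^2 = a * (-x^2 + 6x - 5)."""
    mos = np.asarray(mos, dtype=float)
    sos = np.asarray(sos, dtype=float)
    basis = -mos**2 + 6.0 * mos - 5.0          # vanishes at MOS = 1 and MOS = 5
    return float(np.dot(basis, sos**2) / np.dot(basis, basis))

# Illustrative (made-up) per-condition MOS/SOS pairs from a hypothetical study.
mos = [1.4, 2.1, 3.0, 3.8, 4.6]
sos = [0.55, 0.95, 1.10, 0.90, 0.50]
a = fit_sos_parameter(mos, sos)
print(f"estimated SOS parameter a = {a:.3f}")
```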
Fuzzy Logic and the Resolution Principle The relationship between fuzzy logic and two-valued logic in the context of the first order predicate calculus is discussed. It is proved that if every clause in a set of clauses is something more than a "half-truth" and the most reliable clause has truth-value a and the most unreliable clause has truth-value b, then we are guaranteed that all the logical consequences obtained by repeatedly applying the resolution principle will have truth-value between a and b. The significance of this theorem is also discussed.
Machine Understanding of Natural Language
Rapid method to account for process variation in full-chip capacitance extraction Full-chip capacitance extraction programs based on lookup techniques, such as HILEX/CUP , can be enhanced to rigorously account for process variations in the dimensions of very large scale integration interconnect wires with only modest additional computational effort. HILEX/CUP extracts interconnect capacitance from layout using analytical models with reasonable accuracy. These extracted capacitances are strictly valid only for the nominal interconnect dimensions; the networked nature of capacitive relationships in dense, complex interconnect structures precludes simple extrapolations of capacitance with dimensional changes. However, the derivatives, with respect to linewidth variation of the analytical models, can be accumulated along with the capacitance itself for each interacting pair of nodes. A numerically computed derivative with respect to metal and dielectric layer thickness variation can also be accumulated. Each node pair's extracted capacitance and its gradient with respect to linewidth and thickness variation on each metal and dielectric layer can be stored in a file. Thus, instead of storing a scalar value for each extracted capacitance, a vector of 3I+1 values will be stored for capacitance and its gradient, where I is the number of metal layers. Subsequently, this gradient information can be used during circuit simulation in conjunction with any arbitrary vector of interconnect process variations to perform sensitivity analysis of circuit performance.
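Since each node pair stores its nominal capacitance together with a gradient vector, a downstream tool can estimate the capacitance for an arbitrary process-variation vector by a first-order update. The sketch below shows only that propagation step; variable names, units and the ordering of the gradient entries are assumptions, not the paper's file format.

```python
import numpy as np

def perturbed_capacitance(c_nominal, gradient, delta):
    """First-order estimate of an extracted node-pair capacitance under process
    variation: C(delta) ~= C0 + g . delta.  `gradient` and `delta` hold one entry
    per varying dimension (e.g. linewidth and thickness per layer); the layout of
    these vectors is illustrative, not the actual HILEX/CUP storage format."""
    return c_nominal + float(np.dot(gradient, delta))

c0 = 1.83e-15                                  # nominal coupling capacitance [F]
g = np.array([4.1e-17, -2.3e-17, 1.2e-17])     # dC/d(variation) per dimension [F/unit]
delta = np.array([0.5, -1.0, 0.2])             # assumed process shifts [units]
print(f"perturbed C = {perturbed_capacitance(c0, g, delta):.3e} F")
```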
Construction of interval-valued fuzzy entropy invariant by translations and scalings In this paper, we propose a method to construct interval-valued fuzzy entropies (Burillo and Bustince 1996). This method uses special aggregation functions applied to interval-contrasts. In this way, we are able to construct interval-valued fuzzy entropies from automorphisms and implication operators. Finally, we study the invariance of our constructions by scaling and translation.
Scores: 1.076132, 0.052625, 0.052625, 0.052625, 0.014286, 0.000201, 0.000011, 0.000004, 0, 0, 0, 0, 0, 0
Matchings and transversals in hypergraphs, domination and independence in trees A family of hypergraphs is exhibited which have the property that the minimum cardinality of a transversal is equal to the maximum cardinality of a matching. A result concerning domination and independence in trees which generalises a recent result of Meir and Moon is deduced.
Domination in intersecting hypergraphs. A matching in a hypergraph H is a set of pairwise disjoint hyperedges. The matching number α′(H) of H is the size of a maximum matching in H. A subset D of vertices of H is a dominating set of H if for every v∈V∖D there exists u∈D such that u and v lie in a hyperedge of H. The cardinality of a minimum dominating set of H is called the domination number of H, denoted by γ(H). It is known that for an intersecting hypergraph H with rank r, γ(H)≤r−1. In this paper we present structural properties on intersecting hypergraphs with rank r satisfying the equality γ(H)=r−1. By applying these properties we show that all linear intersecting hypergraphs H with rank 4 satisfying γ(H)=r−1 can be constructed by the well-known Fano plane.
Linear hypergraphs with large transversal number and maximum degree two For k ≥ 2, let H be a k-uniform hypergraph on n vertices and m edges. The transversal number τ(H) of H is the minimum number of vertices that intersect every edge. Chvátal and McDiarmid [V. Chvátal, C. McDiarmid, Small transversals in hypergraphs, Combinatorica 12 (1992) 19-26] proved that τ(H) ≤ (n + ⌊k/2⌋m)/⌊3k/2⌋. In particular, for k ∈ {2,3} we have that (k+1)τ(H) ≤ n + m. A linear hypergraph is one in which every two distinct edges of H intersect in at most one vertex. In this paper, we consider the following question posed by Henning and Yeo: Is it true that if H is linear, then (k+1)τ(H) ≤ n + m holds for all k ≥ 2? If k ≥ 4 and we relax the linearity constraint, then this is not always true. We show that if Δ(H) ≤ 2, then (k+1)τ(H) ≤ n + m does hold for all k ≥ 2 and we characterize the hypergraphs achieving equality in this bound.
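To make the quantities above concrete, the sketch below builds a transversal greedily for a small k-uniform hypergraph and evaluates the Chvátal-McDiarmid bound (n + ⌊k/2⌋m)/⌊3k/2⌋ quoted in the abstract. The greedy set is a valid transversal but not necessarily a minimum one, so both printed numbers are upper bounds on τ(H); the example hypergraph is made up.

```python
from math import floor

def greedy_transversal(edges):
    """Greedy transversal of a hypergraph given as a list of vertex sets:
    repeatedly pick a vertex hitting the most uncovered edges."""
    uncovered = [set(e) for e in edges]
    T = set()
    while uncovered:
        counts = {}
        for e in uncovered:
            for v in e:
                counts[v] = counts.get(v, 0) + 1
        v = max(counts, key=counts.get)
        T.add(v)
        uncovered = [e for e in uncovered if v not in e]
    return T

def chvatal_mcdiarmid_bound(n, m, k):
    """Upper bound tau(H) <= (n + floor(k/2)*m) / floor(3k/2) for k-uniform H."""
    return (n + floor(k / 2) * m) / floor(3 * k / 2)

# A small 3-uniform example on vertices 0..5.
edges = [{0, 1, 2}, {2, 3, 4}, {1, 4, 5}, {0, 3, 5}]
T = greedy_transversal(edges)
print("greedy transversal size:", len(T))
print("Chvatal-McDiarmid bound:", chvatal_mcdiarmid_bound(n=6, m=4, k=3))
```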
Matching and domination numbers in r-uniform hypergraphs. A matching is a set of pairwise disjoint hyperedges of a hypergraph H. The matching number ν(H) of H is the maximum cardinality of a matching. A subset D of vertices of H is called a dominating set of H if for every vertex v not in D there exists u ∈ D such that u and v are contained in a hyperedge of H. The minimum cardinality of a dominating set of H is called the domination number of H and is denoted by γ(H). In this paper we show that every r-uniform hypergraph H satisfies the inequality γ(H) ≤ (r−1)ν(H) and the bound is sharp.
Equality of domination and transversal numbers in hypergraphs A subset S of the vertex set of a hypergraph H is called a dominating set of H if for every vertex v not in S there exists u ∈ S such that u and v are contained in an edge in H. The minimum cardinality of a dominating set in H is called the domination number of H and is denoted by γ(H). A transversal of a hypergraph H is defined to be a subset T of the vertex set such that T ∩ E ≠ ∅ for every edge E of H. The transversal number of H, denoted by τ(H), is the minimum number of vertices in a transversal. A hypergraph is of rank k if each of its edges contains at most k vertices. The inequality τ(H) ≥ γ(H) is valid for every hypergraph H without isolated vertices. In this paper, we investigate the hypergraphs satisfying τ(H) = γ(H), and prove that their recognition problem is NP-hard already on the class of linear hypergraphs of rank 3, while on unrestricted problem instances it lies inside the complexity class Θ₂^p. Structurally we focus our attention on hypergraphs in which each subhypergraph H′ without isolated vertices fulfills the equality τ(H′) = γ(H′). We show that if each induced subhypergraph satisfies the equality then it holds for the non-induced ones as well. Moreover, we prove that for every positive integer k, there are only a finite number of forbidden subhypergraphs of rank k, and each of them has domination number at most k.
Small transversals in hypergraphs For each positive integer k, we consider the set A_k of all ordered pairs [a, b] such that in every k-graph with n vertices and m edges some set of at most am + bn vertices meets all the edges. We show that each A_k with k ≥ 2 has infinitely many extreme points and conjecture that, for every positive ε, it has only finitely many extreme points [a, b] with a ≥ ε. With the extreme points ordered by the first coordinate, we identify the last two extreme points of every A_k, identify the last three extreme points of A_3, and describe A_2 completely. A by-product of our arguments is a new algorithmic proof of Turán's theorem.
Independent systems of representatives in weighted graphs The following conjecture may have never been explicitly stated, but seems to have been floating around: if the vertex set of a graph with maximal degree Δ is partitioned into sets V i of size 2Δ, then there exists a coloring of the graph by 2Δ colors, where each color class meets each V i at precisely one vertex. We shall name it the strong 2Δ-colorability conjecture. We prove a fractional version of this conjecture. For this purpose, we prove a weighted generalization of a theorem of Haxell, on independent systems of representatives (ISR’s). En route, we give a survey of some recent developments in the theory of ISR’s.
Learning and classification of monotonic ordinal concepts
Proactive secret sharing or: How to cope with perpetual leakage Secret sharing schemes protect secrets by distributing them over different locations (share holders). In particular, in k out of n threshold schemes, security is assured if throughout the entire life-time of the secret the adversary is restricted to compromise less than k of the n locations. For long-lived and sensitive secrets this protection may be insufficient. We propose an efficient proactive secret sharing scheme, where shares are periodically renewed (without changing the secret) in such it way that information gained by the adversary in one time period is useless for attacking the secret after the shares are renewed. Hence, the adversary willing to learn the secret needs to break to all k locations during the same time period (e.g., one day, a week, etc.). Furthermore, in order to guarantee the availability and integrity of the secret, we provide mechanisms to detect maliciously (or accidentally) corrupted shares, as well as mechanisms to secretly recover the correct shares when modification is detected.
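A minimal sketch of the share-renewal idea follows, assuming plain Shamir (k, n) sharing over a prime field: refreshing adds shares of a random polynomial with constant term zero, so the secret is preserved while old shares become useless to an adversary. The real protocol performs this jointly and verifiably among the share holders and also detects and repairs corrupted shares, none of which is modeled here; the prime and parameters are illustrative.

```python
import random

P = 2**127 - 1          # a large prime field (illustrative choice)

def eval_poly(coeffs, x):
    """Evaluate a polynomial with the given coefficients at x, modulo P."""
    y = 0
    for c in reversed(coeffs):
        y = (y * x + c) % P
    return y

def share(secret, k, n):
    """Shamir (k, n) sharing: shares are points (i, f(i)) of a random degree
    k-1 polynomial whose constant term is the secret."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    return {i: eval_poly(coeffs, i) for i in range(1, n + 1)}

def refresh(shares, k):
    """Proactive renewal: add a share of a fresh random polynomial with constant
    term 0 to every share, leaving the secret unchanged.  (Sketch only: the real
    protocol runs this jointly among the holders, without a trusted dealer.)"""
    zero_coeffs = [0] + [random.randrange(P) for _ in range(k - 1)]
    return {i: (y + eval_poly(zero_coeffs, i)) % P for i, y in shares.items()}

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over any k shares."""
    secret = 0
    pts = list(shares.items())
    for i, (xi, yi) in enumerate(pts):
        num, den = 1, 1
        for j, (xj, _) in enumerate(pts):
            if i != j:
                num = (num * -xj) % P
                den = (den * (xi - xj)) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

s = 123456789
shares = share(s, k=3, n=5)
shares = refresh(shares, k=3)
subset = {i: shares[i] for i in (1, 3, 5)}
print(reconstruct(subset) == s)   # True: the refreshed shares still encode s
```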
Criticality computation in parameterized statistical timing Chips manufactured in 90 nm technology have shown large parametric variations, and a worsening trend is predicted. These parametric variations make circuit optimization difficult since different paths are frequency-limiting in different parts of the multi-dimensional process space. Therefore, it is desirable to have a new diagnostic metric for robust circuit optimization. This paper presents a novel algorithm to compute the criticality probability of every edge in the timing graph of a design with linear complexity in the circuit size. Using industrial benchmarks, we verify the correctness of our criticality computation via Monte Carlo simulation. We also show that for large industrial designs with 442,000 gates, our algorithm computes all edge criticalities in less than 160 seconds
Mono-multi bipartite Ramsey numbers, designs, and matrices Eroh and Oellermann defined BRR(G1, G2) as the smallest N such that any edge coloring of the complete bipartite graph KN, N contains either a monochromatic G1 or a multicolored G2. We restate the problem of determining BRR(K1,λ, Kr,s) in matrix form and prove estimates and exact values for several choices of the parameters. Our general bound uses Füredi's result on fractional matchings of uniform hypergraphs and we show that it is sharp if certain block designs exist. We obtain two sharp results for the case r = s = 2: we prove BRR(K1,λ, K2,2) = 3λ - 2 and that the smallest n for which any edge coloring of Kλ,n contains either a monochromatic K1,λ or a multicolored K2,2 is λ2.
Hierarchical statistical characterization of mixed-signal circuits using behavioral modeling A methodology for hierarchical statistical circuit characterization which does not rely upon circuit-level Monte Carlo simulation is presented. The methodology uses principal component analysis, response surface methodology, and statistics to directly calculate the statistical distributions of higher-level parameters from the distributions of lower-level parameters. We have used the methodology to characterize a folded cascode operational amplifier and a phase-locked loop. This methodology permits the statistical characterization of large analog and mixed-signal systems, many of which are extremely time-consuming or impossible to characterize using existing methods.
Spectral Methods for Parameterized Matrix Equations. We apply polynomial approximation methods-known in the numerical PDEs context as spectral methods-to approximate the vector-valued function that satisfies a linear system of equations where the matrix and the right-hand side depend on a parameter. We derive both an interpolatory pseudospectral method and a residual-minimizing Galerkin method, and we show how each can be interpreted as solving a truncated infinite system of equations; the difference between the two methods lies in where the truncation occurs. Using classical theory, we derive asymptotic error estimates related to the region of analyticity of the solution, and we present a practical residual error estimate. We verify the results with two numerical examples.
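The interpolatory pseudospectral variant described above amounts to solving the parameterized system at a set of collocation points and fitting a polynomial to each solution component. A minimal sketch on a made-up 2-by-2 family A(θ), using Chebyshev points and NumPy's Chebyshev fitting routines, follows; it is not the Galerkin formulation, and the error check is purely illustrative.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def A(theta):
    """A hypothetical parameter-dependent matrix on theta in [-1, 1]."""
    return np.array([[4.0 + theta, 1.0],
                     [1.0, 3.0 - 0.5 * theta]])

b = np.array([1.0, 2.0])
deg = 8
nodes = np.cos(np.pi * (np.arange(deg + 1) + 0.5) / (deg + 1))   # Chebyshev points

# Interpolatory pseudospectral approach: solve the system at each node,
# then fit a Chebyshev expansion to each solution component.
samples = np.array([np.linalg.solve(A(t), b) for t in nodes])    # shape (deg+1, 2)
coeffs = C.chebfit(nodes, samples, deg)                          # shape (deg+1, 2)

theta = 0.3
x_approx = C.chebval(theta, coeffs)
x_exact = np.linalg.solve(A(theta), b)
print(np.max(np.abs(x_approx - x_exact)))    # tiny for this analytic family
```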
On Fuzziness, Its Homeland and Its Neighbour
Scores: 1.072767, 0.066667, 0.051081, 0.051081, 0.031688, 0.006789, 0.000025, 0, 0, 0, 0, 0, 0, 0
Sparsity preserving projections with applications to face recognition Dimensionality reduction methods (DRs) have commonly been used as a principled way to understand the high-dimensional data such as face images. In this paper, we propose a new unsupervised DR method called sparsity preserving projections (SPP). Unlike many existing techniques such as local preserving projection (LPP) and neighborhood preserving embedding (NPE), where local neighborhood information is preserved during the DR procedure, SPP aims to preserve the sparse reconstructive relationship of the data, which is achieved by minimizing a L1 regularization-related objective function. The obtained projections are invariant to rotations, rescalings and translations of the data, and more importantly, they contain natural discriminating information even if no class labels are provided. Moreover, SPP chooses its neighborhood automatically and hence can be more conveniently used in practice compared to LPP and NPE. The feasibility and effectiveness of the proposed method is verified on three popular face databases (Yale, AR and Extended Yale B) with promising results.
Beyond sparsity: The role of L1-optimizer in pattern classification The newly-emerging sparse representation-based classifier (SRC) shows great potential for pattern classification but lacks theoretical justification. This paper gives an insight into SRC and seeks reasonable supports for its effectiveness. SRC uses the L1-optimizer instead of the L0-optimizer on account of computational convenience and efficiency. We re-examine the role of the L1-optimizer and find that for pattern recognition tasks, the L1-optimizer provides more classification meaningful information than the L0-optimizer does. The L0-optimizer can achieve sparsity only, whereas the L1-optimizer can achieve closeness as well as sparsity. Sparsity determines a small number of nonzero representation coefficients, while closeness makes the nonzero representation coefficients concentrate on the training samples with the same class label as the given test sample. Thus, it is closeness that guarantees the effectiveness of the L1-optimizer based SRC. Based on the closeness prior, we further propose two kinds of class L1-optimizer classifiers (CL1C), the closeness rule based CL1C (C-CL1C) and its improved version: the Lasso rule based CL1C (L-CL1C). The proposed classifiers are evaluated on five databases and the experimental results demonstrate advantages of the proposed classifiers over SRC in classification performance and computational efficiency for large sample size problems.
Convergence of Fixed-Point Continuation Algorithms for Matrix Rank Minimization The matrix rank minimization problem has applications in many fields, such as system identification, optimal control, low-dimensional embedding, etc. As this problem is NP-hard in general, its convex relaxation, the nuclear norm minimization problem, is often solved instead. Recently, Ma, Goldfarb and Chen proposed a fixed-point continuation algorithm for solving the nuclear norm minimization problem (Math. Program., doi: 10.1007/s10107-009-0306-5, 2009). By incorporating an approximate singular value decomposition technique in this algorithm, the solution to the matrix rank minimization problem is usually obtained. In this paper, we study the convergence/recoverability properties of the fixed-point continuation algorithm and its variants for matrix rank minimization. Heuristics for determining the rank of the matrix when its true rank is not known are also proposed. Some of these algorithms are closely related to greedy algorithms in compressed sensing. Numerical results for these algorithms for solving affinely constrained matrix rank minimization problems are reported.
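The core step of fixed-point algorithms of this kind is a gradient step on the data-fidelity term followed by soft-thresholding of the singular values (the proximal operator of the nuclear norm). The sketch below applies that iteration to a toy matrix-completion instance; the step size, threshold and iteration count are arbitrary choices of ours, not the tuned continuation scheme of the cited method.

```python
import numpy as np

def complete_lowrank(M_obs, mask, thresh=0.5, step=1.0, iters=500):
    """Proximal-gradient sketch for nuclear-norm regularized matrix completion:
    a gradient step on the observed-entry misfit, then soft-thresholding of the
    singular values.  Parameter values are illustrative only."""
    X = np.zeros_like(M_obs)
    for _ in range(iters):
        Y = X - step * mask * (X - M_obs)                 # gradient step
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        X = (U * np.maximum(s - thresh, 0.0)) @ Vt        # shrink singular values
    return X

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 4)) @ rng.standard_normal((4, 30))   # rank-4 target
mask = (rng.random(A.shape) < 0.5).astype(float)                  # observe ~50% of entries
X = complete_lowrank(A * mask, mask)
print("relative error:", np.linalg.norm(X - A) / np.linalg.norm(A))
```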
Sparse Representation for Computer Vision and Pattern Recognition Techniques from sparse signal representation are beginning to see significant impact in computer vision, often on nontraditional applications where the goal is not just to obtain a compact high-fidelity representation of the observed signal, but also to extract semantic information. The choice of dictionary plays a key role in bridging this gap: unconventional dictionaries consisting of, or learne...
Quantization of Sparse Representations Compressive sensing (CS) is a new signal acquisition technique for sparse and compressible signals. Rather than uniformly sampling the signal, CS computes inner products with randomized basis functions; the signal is then recovered by a convex optimization. Random CS measurements are universal in the sense that the same acquisition system is sufficient for signals sparse in any representation. This paper examines the quantization of strictly sparse, power-limited signals and concludes that CS with scalar quantization uses its allocated rate inefficiently.
Subspace pursuit for compressive sensing signal reconstruction We propose a new method for reconstruction of sparse signals with and without noisy perturbations, termed the subspace pursuit algorithm. The algorithm has two important characteristics: low computational complexity, comparable to that of orthogonal matching pursuit techniques when applied to very sparse signals, and reconstruction accuracy of the same order as that of linear programming (LP) optimization methods. The presented analysis shows that in the noiseless setting, the proposed algorithm can exactly reconstruct arbitrary sparse signals provided that the sensing matrix satisfies the restricted isometry property with a constant parameter. In the noisy setting and in the case that the signal is not exactly sparse, it can be shown that the mean-squared error of the reconstruction is upper-bounded by constant multiples of the measurement and signal perturbation energies.
Sparse representation for color image restoration. Sparse representations of signals have drawn considerable interest in recent years. The assumption that natural signals, such as images, admit a sparse decomposition over a redundant dictionary leads to efficient algorithms for handling such sources of data. In particular, the design of well adapted dictionaries for images has been a major challenge. The K-SVD has been recently proposed for this task and shown to perform very well for various grayscale image processing tasks. In this paper, we address the problem of learning dictionaries for color images and extend the K-SVD-based grayscale image denoising algorithm that appears in. This work puts forward ways for handling nonhomogeneous noise and missing information, paving the way to state-of-the-art results in applications such as color image denoising, demosaicing, and inpainting, as demonstrated in this paper.
Sets with type-2 operations The algebra of truth values of type-2 fuzzy sets consists of all mappings of the unit interval to itself, with type-2 operations that are convolutions of ordinary max and min operations. This paper is concerned with a special subalgebra of this truth value algebra, namely the set of nonzero functions with values in the two-element set {0,1}. This algebra can be identified with the set of all non-empty subsets of the unit interval, but the operations are not the usual union and intersection. We give simplified descriptions of the operations and derive the basic algebraic properties of this algebra, including the identification of its automorphism group. We also discuss some subalgebras and homomorphisms between them and look briefly at t-norms on this algebra of sets.
Fuzzy connection admission control for ATM networks based on possibility distribution of cell loss ratio This paper proposes a connection admission control (CAC) method for asynchronous transfer mode (ATM) networks based on the possibility distribution of cell loss ratio (CLR). The possibility distribution is estimated in a fuzzy inference scheme by using observed data of the CLR. This method makes possible secure CAC, thereby guaranteeing the allowed CLR. First, a fuzzy inference method is proposed, based on a weighted average of fuzzy sets, in order to estimate the possibility distribution of the CLR. In contrast to conventional methods, the proposed inference method can avoid estimating excessively large values of the CLR. Second, the learning algorithm is considered for tuning fuzzy rules for inference. In this, energy functions are derived so as to efficiently achieve higher multiplexing gain by applying them to CAC. Because the upper bound of the CLR can easily be obtained from the possibility distribution by using this algorithm, CAC can be performed guaranteeing the allowed CLR. The simulation studies show that the proposed method can well extract the upper bound of the CLR from the observed data. The proposed method also makes possible self-compensation in real time for the case where the estimated CLR is smaller than the observed CLR. It preserves the guarantee of the CLR as much as possible in operation of ATM switches. Third, a CAC method which uses the fuzzy inference mentioned above is proposed. In the area with no observed CLR data, fuzzy rules are automatically generated from the fuzzy rules already tuned by the learning algorithm with the existing observed CLR data. Such areas exist because of the absence of experience in connections. This method can guarantee the allowed CLR in the CAC and attains a high multiplex gain as is possible. The simulation studies show its feasibility. Finally, this paper concludes with some brief discussions
Incremental criticality and yield gradients Criticality and yield gradients are two crucial diagnostic metrics obtained from Statistical Static Timing Analysis (SSTA). They provide valuable information to guide timing optimization and timing-driven physical synthesis. Existing work in the literature, however, computes both metrics in a non-incremental manner, i.e., after one or more changes are made in a previously-timed circuit, both metrics need to be recomputed from scratch, which is obviously undesirable for optimizing large circuits. The major contribution of this paper is to propose two novel techniques to compute both criticality and yield gradients efficiently and incrementally. In addition, while node and edge criticalities are addressed in the literature, this paper for the first time describes a technique to compute path criticalities. To further improve algorithmic efficiency, this paper also proposes a novel technique to update "chip slack" incrementally. Numerical results show our methods to be over two orders of magnitude faster than previous work.
Fast image recovery using variable splitting and constrained optimization We propose a new fast algorithm for solving one of the standard formulations of image restoration and reconstruction which consists of an unconstrained optimization problem where the objective includes an l2 data-fidelity term and a nonsmooth regularizer. This formulation allows both wavelet-based (with orthogonal or frame-based representations) regularization or total-variation regularization. Our approach is based on a variable splitting to obtain an equivalent constrained optimization formulation, which is then addressed with an augmented Lagrangian method. The proposed algorithm is an instance of the so-called alternating direction method of multipliers, for which convergence has been proved. Experiments on a set of image restoration and reconstruction benchmark problems show that the proposed algorithm is faster than the current state of the art methods.
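The same alternating direction machinery can be written down compactly for the simplest case, ℓ1-regularized least squares with the splitting x = z; the paper applies it to frame-based and total-variation regularizers instead. The sketch below is that simplified instance, with made-up problem data and parameter values.

```python
import numpy as np

def admm_l1(A, b, lam=0.1, rho=1.0, iters=200):
    """Minimal ADMM sketch for  min_x 0.5*||Ax - b||^2 + lam*||x||_1,
    using the split x = z (the cited method applies the same machinery to
    frame/TV regularizers and image operators)."""
    n = A.shape[1]
    AtA = A.T @ A
    Atb = A.T @ b
    L = np.linalg.cholesky(AtA + rho * np.eye(n))        # factor once, reuse
    x = np.zeros(n)
    z = np.zeros(n)
    u = np.zeros(n)
    for _ in range(iters):
        rhs = Atb + rho * (z - u)
        x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))                 # quadratic step
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0.0)   # soft-threshold
        u = u + x - z                                                     # multiplier update
    return z

rng = np.random.default_rng(2)
A = rng.standard_normal((60, 200))
x_true = np.zeros(200)
x_true[rng.choice(200, 6, replace=False)] = 1.0
b = A @ x_true + 0.01 * rng.standard_normal(60)
x_hat = admm_l1(A, b, lam=0.2)
print("recovered support:", np.nonzero(np.abs(x_hat) > 0.3)[0])
```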
Induced uncertain linguistic OWA operators applied to group decision making The ordered weighted averaging (OWA) operator was developed by Yager [IEEE Trans. Syst., Man, Cybernet. 18 (1988) 183]. Later, Yager and Filev [IEEE Trans. Syst., Man, Cybernet.--Part B 29 (1999) 141] introduced a more general class of OWA operators called the induced ordered weighted averaging (IOWA) operators, which take as their argument pairs, called OWA pairs, in which one component is used to induce an ordering over the second components which are exact numerical values and then aggregated. The aim of this paper is to develop some induced uncertain linguistic OWA (IULOWA) operators, in which the second components are uncertain linguistic variables. Some desirable properties of the IULOWA operators are studied, and then, the IULOWA operators are applied to group decision making with uncertain linguistic information.
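For orientation, the sketch below shows the plain numeric OWA operator and its induced variant, where a separate order-inducing component drives the reordering; the paper's IULOWA operators replace the numeric arguments with uncertain linguistic variables, which is not modeled here. Weights and pairs are illustrative.

```python
import numpy as np

def owa(weights, values):
    """Ordered weighted averaging: sort the arguments in descending order,
    then take the weighted sum with the position-based OWA weights."""
    return float(np.dot(weights, np.sort(values)[::-1]))

def iowa(weights, pairs):
    """Induced OWA: each argument is (order-inducing value, value); the ordering
    is driven by the inducing component, not by the value itself."""
    ordered = [v for _, v in sorted(pairs, key=lambda p: p[0], reverse=True)]
    return float(np.dot(weights, ordered))

w = [0.4, 0.3, 0.2, 0.1]
print(owa(w, [0.6, 0.9, 0.2, 0.7]))                       # reorders by value
print(iowa(w, [(3, 0.6), (1, 0.9), (4, 0.2), (2, 0.7)]))  # reorders by inducing value
```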
New Type-2 Rule Ranking Indices for Designing Parsimonious Interval Type-2 Fuzzy Logic Systems In this paper, we propose two novel indices for type-2 fuzzy rule ranking to identify the most influential fuzzy rules in designing type-2 fuzzy logic systems, and name them as R-values and c-values of fuzzy rules separately. The R-values of type-2 fuzzy rules are obtained by applying QR decomposition in which there is no need to estimate a rank as required in the SVD-QR with column pivoting algorithm. The c-values of type-2 fuzzy rules are suggested to rank rules based on the effects of rule consequents. Experimental results on a signal recovery problem have shown that by using the proposed indices the most influential type-2 fuzzy rules can be effectively selected to construct parsimonious type-2 fuzzy models while the system performances are kept at a satisfied level.
Designing type-2 fuzzy logic system controllers via fuzzy Lyapunov synthesis for the output regulator of a servomechanism with nonlinear backlash Fuzzy Lyapunov Synthesis is extended to the design of Type-2 Fuzzy Logic System Controllers for the output regulation problem for a servomechanism with nonlinear backlash. The problem in question is to design a feedback controller so as to obtain the closed-loop system in which all trajectories are bounded and the load of the driver is regulated to a desired position while also attenuating the influence of external disturbances. The servomotor position is the only measurement available for feedback; the proposed extension is far from trivial because of nonminimum phase properties of the system. Performance issues of the Type-2 Fuzzy Logic Regulator constructed are illustrated in a simulation study.
Scores: 1.025455, 0.018182, 0.009091, 0.006727, 0.002273, 0.001119, 0.000019, 0, 0, 0, 0, 0, 0, 0
Adaptive learning of linguistic hierarchy in a multiple timescale recurrent neural network Recent research has revealed that hierarchical linguistic structures can emerge in a recurrent neural network with a sufficient number of delayed context layers. As a representative of this type of network the Multiple Timescale Recurrent Neural Network (MTRNN) has been proposed for recognising and generating known as well as unknown linguistic utterances. However the training of utterances performed in other approaches demands a high training effort. In this paper we propose a robust mechanism for adaptive learning rates and internal states to speed up the training process substantially. In addition we compare the generalisation of the network for the adaptive mechanism as well as the standard fixed learning rates finding at least equal capabilities.
Towards situated speech understanding: visual context priming of language models Fuse is a situated spoken language understanding system that uses visual context to steer the interpretation of speech. Given a visual scene and a spoken description, the system finds the object in the scene that best fits the meaning of the description. To solve this task, Fuse performs speech recognition and visually-grounded language understanding. Rather than treat these two problems separately, knowledge of the visual semantics of language and the specific contents of the visual scene are fused during speech processing. As a result, the system anticipates various ways a person might describe any object in the scene, and uses these predictions to bias the speech recognizer towards likely sequences of words. A dynamic visual attention mechanism is used to focus processing on likely objects within the scene as spoken utterances are processed. Visual attention and language prediction reinforce one another and converge on interpretations of incoming speech signals which are most consistent with visual context. In evaluations, the introduction of visual context into the speech recognition process results in significantly improved speech recognition and understanding accuracy. The underlying principles of this model may be applied to a wide range of speech understanding problems including mobile and assistive technologies in which contextual information can be sensed and semantically interpreted to bias processing.
Embodied Language Understanding with a Multiple Timescale Recurrent Neural Network How the human brain understands natural language and what we can learn for intelligent systems is open research. Recently, researchers claimed that language is embodied in most — if not all — sensory and sensorimotor modalities and that the brain's architecture favours the emergence of language. In this paper we investigate the characteristics of such an architecture and propose a model based on the Multiple Timescale Recurrent Neural Network, extended by embodied visual perception. We show that such an architecture can learn the meaning of utterances with respect to visual perception and that it can produce verbal utterances that correctly describe previously unknown scenes.
The hierarchical and functional connectivity of higher-order cognitive mechanisms: neurorobotic model to investigate the stability and flexibility of working memory. Higher-order cognitive mechanisms (HOCM), such as planning, cognitive branching, switching, etc., are known to be the outcomes of a unique neural organizations and dynamics between various regions of the frontal lobe. Although some recent anatomical and neuroimaging studies have shed light on the architecture underlying the formation of such mechanisms, the neural dynamics and the pathways in and between the frontal lobe to form and/or to tune the stability level of its working memory remain controversial. A model to clarify this aspect is therefore required. In this study, we propose a simple neurocomputational model that suggests the basic concept of how HOCM, including the cognitive branching and switching in particular, may mechanistically emerge from time-based neural interactions. The proposed model is constructed such that its functional and structural hierarchy mimics, to a certain degree, the biological hierarchy that is believed to exist between local regions in the frontal lobe. Thus, the hierarchy is attained not only by the force of the layout architecture of the neural connections but also through distinct types of neurons, each with different time properties. To validate the model, cognitive branching and switching tasks were simulated in a physical humanoid robot driven by the model. Results reveal that separation between the lower and the higher-level neurons in such a model is an essential factor to form an appropriate working memory to handle cognitive branching and switching. The analyses of the obtained result also illustrates that the breadth of this separation is important to determine the characteristics of the resulting memory, either static memory or dynamic memory. This work can be considered as a joint research between synthetic and empirical studies, which can open an alternative research area for better understanding of brain mechanisms.
Exploring Strategies for Training Deep Neural Networks Deep multi-layer neural networks have many levels of non-linearities allowing them to compactly represent highly non-linear and highly-varying functions. However, until recently it was not clear how to train such deep networks, since gradient-based optimization starting from random initialization often appears to get stuck in poor solutions. Hinton et al. recently proposed a greedy layer-wise unsupervised learning procedure relying on the training algorithm of restricted Boltzmann machines (RBM) to initialize the parameters of a deep belief network (DBN), a generative model with many layers of hidden causal variables. This was followed by the proposal of another greedy layer-wise procedure, relying on the usage of autoassociator networks. In the context of the above optimization problem, we study these algorithms empirically to better understand their success. Our experiments confirm the hypothesis that the greedy layer-wise unsupervised training strategy helps the optimization by initializing weights in a region near a good local minimum, but also implicitly acts as a sort of regularization that brings better generalization and encourages internal distributed representations that are high-level abstractions of the input. We also present a series of experiments aimed at evaluating the link between the performance of deep neural networks and practical aspects of their topology, for example, demonstrating cases where the addition of more depth helps. Finally, we empirically explore simple variants of these training algorithms, such as the use of different RBM input unit distributions, a simple way of combining gradient estimators to improve performance, as well as on-line versions of those algorithms.
Anaphora for everyone: pronominal anaphora resolution without a parser We present an algorithm for anaphora resolution which is a modified and extended version of that developed by (Lappin and Leass, 1994). In contrast to that work, our algorithm does not require in-depth, full, syntactic parsing of text. Instead, with minimal compromise in output quality, the modifications enable the resolution process to work from the output of a part of speech tagger, enriched only with annotations of grammatical function of lexical items in the input text stream. Evaluation of the results of our implementation demonstrates that accurate anaphora resolution can be realized within natural language processing frameworks which do not, or cannot, employ robust and reliable parsing components.
Counter braids: a novel counter architecture for per-flow measurement Fine-grained network measurement requires routers and switches to update large arrays of counters at very high link speed (e.g. 40 Gbps). A naive algorithm needs an infeasible amount of SRAM to store both the counters and a flow-to-counter association rule, so that arriving packets can update corresponding counters at link speed. This has made accurate per-flow measurement complex and expensive, and motivated approximate methods that detect and measure only the large flows. This paper revisits the problem of accurate per-flow measurement. We present a counter architecture, called Counter Braids, inspired by sparse random graph codes. In a nutshell, Counter Braids "compresses while counting". It solves the central problems (counter space and flow-to-counter association) of per-flow measurement by "braiding" a hierarchy of counters with random graphs. Braiding results in drastic space reduction by sharing counters among flows; and using random graphs generated on-the-fly with hash functions avoids the storage of flow-to-counter association. The Counter Braids architecture is optimal (albeit with a complex decoder) as it achieves the maximum compression rate asymptotically. For implementation, we present a low-complexity message passing decoding algorithm, which can recover flow sizes with essentially zero error. Evaluation on Internet traces demonstrates that almost all flow sizes are recovered exactly with only a few bits of counter space per flow.
MapReduce: simplified data processing on large clusters MapReduce is a programming model and an associated implementation for processing and generating large datasets that is amenable to a broad variety of real-world tasks. Users specify the computation in terms of a map and a reduce function, and the underlying runtime system automatically parallelizes the computation across large-scale clusters of machines, handles machine failures, and schedules inter-machine communication to make efficient use of the network and disks. Programmers find the system easy to use: more than ten thousand distinct MapReduce programs have been implemented internally at Google over the past four years, and an average of one hundred thousand MapReduce jobs are executed on Google's clusters every day, processing a total of more than twenty petabytes of data per day.
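A single-process sketch of the programming model follows: a map function emits key-value pairs, a grouping step plays the role of the shuffle the runtime performs across machines, and a reduce function folds each group. It illustrates only the user-facing model, not the distributed execution, fault tolerance or scheduling described above.

```python
from collections import defaultdict

def map_phase(document):
    """Map: emit (word, 1) for every word in a document."""
    for word in document.split():
        yield word.lower(), 1

def reduce_phase(word, counts):
    """Reduce: sum the counts collected for one word."""
    return word, sum(counts)

def mapreduce_wordcount(documents):
    # Grouping step standing in for the shuffle the runtime performs across machines.
    groups = defaultdict(list)
    for doc in documents:
        for key, value in map_phase(doc):
            groups[key].append(value)
    return dict(reduce_phase(k, v) for k, v in groups.items())

docs = ["the quick brown fox", "the lazy dog", "the fox"]
print(mapreduce_wordcount(docs))   # {'the': 3, 'quick': 1, ...}
```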
Efficient approximation of random fields for numerical applications This article is dedicated to the rapid computation of separable expansions for the approximation of random fields. We consider approaches based on techniques from the approximation of non-local operators on the one hand and based on the pivoted Cholesky decomposition on the other hand. Especially, we provide an a posteriori error estimate for the pivoted Cholesky decomposition in terms of the trace. Numerical examples are provided to validate and quantify the presented methods. Copyright (c) 2015 John Wiley & Sons, Ltd.
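A minimal sketch of a pivoted Cholesky factorization with the trace-based a posteriori error estimate follows: at each step the largest remaining diagonal entry is chosen as pivot, and the sum of the un-eliminated diagonal (the trace of the Schur complement) bounds the approximation error. The covariance kernel and tolerances are illustrative choices, not taken from the paper.

```python
import numpy as np

def pivoted_cholesky(K, tol=1e-8, max_rank=None):
    """Low-rank pivoted Cholesky factorisation K ~= L @ L.T.  Stops when the
    remaining trace (the a posteriori error estimate) drops below tol."""
    n = K.shape[0]
    max_rank = max_rank or n
    d = np.diag(K).astype(float).copy()
    L = np.zeros((n, 0))
    error = d.sum()
    while error > tol and L.shape[1] < max_rank:
        p = int(np.argmax(d))                               # pivot: largest remaining diagonal
        col = (K[:, p] - L @ L[p, :].T) / np.sqrt(d[p])
        L = np.column_stack([L, col])
        d = d - col**2                                      # update remaining diagonal
        d[d < 0] = 0.0
        error = d.sum()                                     # trace of the Schur complement
    return L, error

# Illustrative covariance of a smooth random field sampled at 200 points.
x = np.linspace(0, 1, 200)
K = np.exp(-np.abs(x[:, None] - x[None, :]) ** 2 / 0.1)
L, err = pivoted_cholesky(K)
print(L.shape[1], "terms, remaining trace", err)
```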
Recognition of shapes by attributed skeletal graphs In this paper, we propose a framework to address the problem of generic 2-D shape recognition. The aim is mainly on using the potential strength of skeleton of discrete objects in computer vision and pattern recognition where features of objects are needed for classification. We propose to represent the medial axis characteristic points as an attributed skeletal graph to model the shape. The information about the object shape and its topology is totally embedded in them and this allows the comparison of different objects by graph matching algorithms. The experimental results demonstrate the correctness in detecting its characteristic points and in computing a more regular and effective representation for a perceptual indexing. The matching process, based on a revised graduated assignment algorithm, has produced encouraging results, showing the potential of the developed method in a variety of computer vision and pattern recognition domains. The results demonstrate its robustness in the presence of scale, reflection and rotation transformations and prove the ability to handle noise and occlusions.
Compressive sampling for streaming signals with sparse frequency content Compressive sampling (CS) has emerged as a significant signal processing framework to acquire and reconstruct sparse signals at rates significantly below the Nyquist rate. However, most of the CS development to-date has focused on finite-length signals and representations. In this paper we discuss a streaming CS framework and greedy reconstruction algorithm, the Streaming Greedy Pursuit (SGP), to reconstruct signals with sparse frequency content. Our proposed sampling framework and the SGP are explicitly intended for streaming applications and signals of unknown length. The measurement framework we propose is designed to be causal and implementable using existing hardware architectures. Furthermore, our reconstruction algorithm provides specific computational guarantees, which makes it appropriate for real-time system implementations. Our experiment results on very long signals demonstrate the good performance of the SGP and validate our approach.
QoE Aware Service Delivery in Distributed Environment Service delivery and customer satisfaction are strongly related items for a correct commercial management platform. Technical aspects targeting this issue relate to QoS parameters that can be handled by the platform, at least partially. Subjective psychological issues and human cognitive aspects are typically unconsidered aspects and they directly determine the Quality of Experience (QoE). These factors finally have to be considered as key input for a successful business operation between a customer and a company. In our work, a multi-disciplinary approach is taken to propose a QoE interaction model based on the theoretical results from various fields including psychology, cognitive sciences, sociology, service ecosystem and information technology. In this paper a QoE evaluator is described for assessing the service delivery in a distributed and integrated environment on a per user and per service basis.
A model to perform knowledge-based temporal abstraction over multiple signals In this paper we propose the Multivariable Fuzzy Temporal Profile model (MFTP), which enables the projection of expert knowledge on a physical system over a computable description. This description may be used to perform automatic abstraction on a set of parameters that represent the temporal evolution of the system. This model is based on the constraint satisfaction problem (CSP) formalism, which enables an explicit representation of the knowledge, and on fuzzy set theory, from which it inherits the ability to model the imprecision and uncertainty that are characteristic of human knowledge vagueness. We also present an application of the MFTP model to the recognition of landmarks in mobile robotics, specifically to the detection of doors on ultrasound sensor signals from a Nomad 200 robot.
Fuzzy optimization of units products in mix-product selection problem using fuzzy linear programming approach In this paper, the modified S-curve membership function methodology is used in a real life industrial problem of mix product selection. This problem occurs in production planning management, where a decision maker plays an important role in making decisions in an uncertain environment. As analysts, we try to find a good enough solution for the decision maker to make a final decision. An industrial application of fuzzy linear programming (FLP) through the S-curve membership function has been investigated using a set of real life data collected from a Chocolate Manufacturing Company. The problem of fuzzy product mix selection has been defined. The objective of this paper is to find an optimal number of units of products with a higher level of satisfaction, with vagueness as a key factor. Since several decisions are to be taken, a table of optimal units of products with respect to vagueness and degree of satisfaction has been defined to identify the solution with a higher number of units of products and a higher degree of satisfaction. The fuzzy outcome shows that a higher number of units of products need not lead to a higher degree of satisfaction. The findings of this work indicate that the optimal decision depends on the vagueness factor in the fuzzy system of the mix-product selection problem. Furthermore, the highest number of units of products is obtained when the vagueness is low.
Scores: 1.072622, 0.070333, 0.070333, 0.070333, 0.017584, 0, 0, 0, 0, 0, 0, 0, 0, 0
The Inherent Indistinguishability in Fuzzy Systems This paper provides an overview of fuzzy systems from the viewpoint of similarity relations. Similarity relations turn out to be an appealing framework in which typical concepts and techniques applied in fuzzy systems and fuzzy control can be better understood and interpreted. They can also be used to describe the indistinguishability inherent in any fuzzy system that cannot be avoided.
Fuzzy homomorphisms of algebras In this paper we consider fuzzy relations compatible with algebraic operations, which are called fuzzy relational morphisms. In particular, we aim our attention to those fuzzy relational morphisms which are uniform fuzzy relations, called uniform fuzzy relational morphisms, and those which are partially uniform F-functions, called fuzzy homomorphisms. Both uniform fuzzy relations and partially uniform F-functions were introduced in a recent paper by us. Uniform fuzzy relational morphisms are especially interesting because they can be conceived as fuzzy congruences which relate elements of two possibly different algebras. We give various characterizations and constructions of uniform fuzzy relational morphisms and fuzzy homomorphisms, we establish certain relationships between them and fuzzy congruences, and we prove homomorphism and isomorphism theorems concerning them. We also point to some applications of uniform fuzzy relational morphisms.
Fuzzy modifiers based on fuzzy relations In this paper we introduce a new type of fuzzy modifiers (i.e. mappings that transform a fuzzy set into a modified fuzzy set) based on fuzzy relations. We show how they can be applied for the representation of weakening adverbs (more or less, roughly) and intensifying adverbs (very, extremely) in the inclusive and the non-inclusive interpretation. We illustrate their use in an approximate reasoning scheme.
Towards a Logic for a Fuzzy Logic Controller
Similarity relations and fuzzy orderings. The notion of ''similarity'' as defined in this paper is essentially a generalization of the notion of equivalence. In the same vein, a fuzzy ordering is a generalization of the concept of ordering. For example, the relation x ≫ y (x is much larger than y) is a fuzzy linear ordering in the set of real numbers. More concretely, a similarity relation, S, is a fuzzy relation which is reflexive, symmetric, and transitive. Thus, let x, y be elements of a set X and μ_S(x,y) denote the grade of membership of the ordered pair (x,y) in S. Then S is a similarity relation in X if and only if, for all x, y, z in X, μ_S(x,x) = 1 (reflexivity), μ_S(x,y) = μ_S(y,x) (symmetry), and μ_S(x,z) ≥ ∨_y (μ_S(x,y) ∧ μ_S(y,z)) (transitivity), where ∨ and ∧ denote max and min, respectively. A fuzzy ordering is a fuzzy relation which is transitive. In particular, a fuzzy partial ordering, P, is a fuzzy ordering which is reflexive and antisymmetric, that is, (μ_P(x,y) > 0 and x ≠ y) ⇒ μ_P(y,x) = 0. A fuzzy linear ordering is a fuzzy partial ordering in which x ≠ y ⇒ μ_S(x,y) > 0 or μ_S(y,x) > 0. A fuzzy preordering is a fuzzy ordering which is reflexive. A fuzzy weak ordering is a fuzzy preordering in which x ≠ y ⇒ μ_S(x,y) > 0 or μ_S(y,x) > 0. Various properties of similarity relations and fuzzy orderings are investigated and, as an illustration, an extended version of Szpilrajn's theorem is proved.
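For a finite universe, a fuzzy relation can be stored as a matrix of membership grades, and the three defining conditions above can be checked directly; transitivity becomes a comparison against the max-min composition of the relation with itself. The sketch below does exactly that on a small invented matrix.

```python
import numpy as np

def is_similarity_relation(S, tol=1e-12):
    """Check reflexivity, symmetry and max-min transitivity of a fuzzy relation
    given as a matrix of membership grades mu_S(x, y) in [0, 1]."""
    reflexive = np.allclose(np.diag(S), 1.0)
    symmetric = np.allclose(S, S.T)
    # max-min composition: (S o S)[i, k] = max_j min(S[i, j], S[j, k])
    comp = np.max(np.minimum(S[:, :, None], S[None, :, :]), axis=1)
    transitive = np.all(S >= comp - tol)
    return reflexive and symmetric and transitive

S = np.array([[1.0, 0.8, 0.6],
              [0.8, 1.0, 0.6],
              [0.6, 0.6, 1.0]])
print(is_similarity_relation(S))   # True: a valid similarity relation
```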
Artificial Paranoia
Processing fuzzy temporal knowledge L.A. Zadeh's (1975) possibility theory is used as a general framework for modeling temporal knowledge pervaded with imprecision or uncertainty. Ill-known dates, time intervals with fuzzy boundaries, fuzzy durations, and uncertain precedence relations between events can be dealt with in this approach. An explicit representation (in terms of possibility distributions) of the available information, which may be neither precise nor certain, is maintained. Deductive patterns of reasoning involving fuzzy and/or uncertain temporal knowledge are established, and the combination of fuzzy partial pieces of information is considered. A scheduling example with fuzzy temporal windows is discussed.
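As a hedged illustration of this style of representation (not the paper's own example), the snippet below encodes two ill-known dates as trapezoidal possibility distributions and evaluates the possibility of each precedence relation as a sup-min over candidate date pairs; the dates and domain are made up.

```python
import numpy as np

def trapezoid(t, a, b, c, d):
    """Trapezoidal possibility distribution with support [a, d] and core [b, c]."""
    return np.clip(np.minimum((t - a) / (b - a), (d - t) / (d - c)), 0.0, 1.0)

# Hypothetical ill-known occurrence dates (in days) of two events A and B.
t = np.linspace(0.0, 30.0, 601)
pi_A = trapezoid(t, 5, 8, 10, 13)      # A happened "around days 8-10"
pi_B = trapezoid(t, 9, 12, 14, 17)     # B happened "around days 12-14"

# Joint degrees min(pi_A(t1), pi_B(t2)) on a grid of candidate date pairs.
M = np.minimum(pi_A[:, None], pi_B[None, :])
t1_before_t2 = t[:, None] < t[None, :]

# Poss(A before B) = sup_{t1 < t2} min(pi_A(t1), pi_B(t2)), and symmetrically for B before A.
poss_A_before_B = float(np.max(np.where(t1_before_t2, M, 0.0)))
poss_B_before_A = float(np.max(np.where(t1_before_t2.T, M, 0.0)))
print(round(poss_A_before_B, 2), round(poss_B_before_A, 2))   # about 1.0 and 0.67
```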
On the capacity of MIMO broadcast channels with partial side information In multiple-antenna broadcast channels, unlike point-to-point multiple-antenna channels, the multiuser capacity depends heavily on whether the transmitter knows the channel coefficients to each user. For instance, in a Gaussian broadcast channel with M transmit antennas and n single-antenna users, the sum rate capacity scales like M log log n for large n if perfect channel state information (CSI) is available at the transmitter, yet only logarithmically with M if it is not. In systems with large n, obtaining full CSI from all users may not be feasible. Since lack of CSI does not lead to multiuser gains, it is therefore of interest to investigate transmission schemes that employ only partial CSI. We propose a scheme that constructs M random beams and transmits information to the users with the highest signal-to-interference-plus-noise ratios (SINRs), which can be made available to the transmitter with very little feedback. For fixed M and increasing n, the throughput of our scheme scales as M log log(nN), where N is the number of receive antennas of each user. This is precisely the same scaling obtained with perfect CSI using dirty paper coding. We furthermore show that a linear increase in throughput with M can be obtained provided that M does not grow faster than log n. We also study the fairness of our scheduling in a heterogeneous network and show that, when M is large enough, the system becomes interference dominated and the probability of transmitting to any user converges to 1/n, irrespective of its path loss. In fact, using M = α log n transmit antennas emerges as a desirable operating point, both in terms of providing linear scaling of the throughput with M and in guaranteeing fairness.
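A rough Monte Carlo sketch of this opportunistic random-beamforming idea (assumed parameters, single-antenna users, not the authors' code) is shown below: the transmitter forms M orthonormal random beams, every user reports its per-beam SINR, and the scheduler serves the strongest user on each beam.

```python
import numpy as np

rng = np.random.default_rng(0)

def sum_rate_random_beamforming(M=4, n_users=200, snr=10.0, trials=200):
    """Average sum rate (bit/s/Hz) of serving the best user on each of M random beams."""
    rates = []
    for _ in range(trials):
        # Random orthonormal beams (columns of Q) and i.i.d. Rayleigh user channels.
        Q, _ = np.linalg.qr(rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M)))
        H = (rng.standard_normal((n_users, M)) + 1j * rng.standard_normal((n_users, M))) / np.sqrt(2)
        G = np.abs(H @ Q) ** 2                       # |h_k^H q_m|^2 for every user/beam pair
        signal = (snr / M) * G
        interference = (snr / M) * (G.sum(axis=1, keepdims=True) - G)
        sinr = signal / (1.0 + interference)         # SINR of user k on beam m
        rates.append(np.sum(np.log2(1.0 + sinr.max(axis=0))))   # best user per beam
    return float(np.mean(rates))

# The throughput should grow roughly like M log log n as the user population grows.
for n in (10, 100, 1000):
    print(n, round(sum_rate_random_beamforming(n_users=n), 2))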
A 2-tuple fuzzy linguistic representation model for computing with words The fuzzy linguistic approach has been applied successfully to many problems. However, there is a limitation of this approach imposed by its information representation model and the computation methods used when fusion processes are performed on linguistic values. This limitation is the loss of information, which implies a lack of precision in the final results of the fusion of linguistic information. In this paper, we present tools for overcoming this limitation. The linguistic information is expressed by means of 2-tuples, which are composed of a linguistic term and a numeric value assessed in [-0.5, 0.5). This model allows a continuous representation of the linguistic information on its domain and can therefore represent any counting of information obtained in an aggregation process. We then develop a computational technique for computing with words without any loss of information. Finally, different classical aggregation operators are extended to deal with the 2-tuple linguistic model.
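The core of the 2-tuple model is the pair of translation functions between a value β ∈ [0, g] (for g + 1 linguistic terms) and a 2-tuple (s_i, α) with α ∈ [-0.5, 0.5). A minimal Python sketch of those translations and of a 2-tuple arithmetic mean follows; the five-label term set is a made-up example, not one from the paper.

```python
# Hypothetical five-term linguistic scale s_0 .. s_4 (g = 4).
TERMS = ["none", "low", "medium", "high", "perfect"]

def to_two_tuple(beta):
    """Delta: map beta in [0, g] to the closest term index i and the offset alpha."""
    i = int(beta + 0.5)                     # round half up (beta is nonnegative here)
    return i, beta - i                      # alpha lies in [-0.5, 0.5)

def from_two_tuple(i, alpha):
    """Inverse of Delta: map a 2-tuple back to its numeric equivalent beta."""
    return i + alpha

def two_tuple_mean(tuples):
    """Aggregate 2-tuples by averaging their numeric equivalents (no information loss)."""
    beta = sum(from_two_tuple(i, a) for i, a in tuples) / len(tuples)
    return to_two_tuple(beta)

ratings = [(3, 0.0), (2, 0.25), (4, -0.4)]          # (term index, offset)
i, alpha = two_tuple_mean(ratings)
print(f"aggregated value: ({TERMS[i]}, {alpha:+.2f})")   # (high, -0.05)
```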
Completeness and consistency conditions for learning fuzzy rules The completeness and consistency conditions were introduced in order to achieve acceptable concept recognition rules. In real problems, we must handle noise-affected examples, and it is not always possible to maintain both conditions. Moreover, when we use fuzzy information there is a partial matching between examples and rules, so the consistency condition becomes a matter of degree. In this paper, a learning algorithm based on soft consistency and completeness conditions is proposed. This learning algorithm combines rule and feature selection in a single process and is tested on different databases.
Estimation of (near) low-rank matrices with noise and high-dimensional scaling We study an instance of high-dimensional inference in which the goal is to estimate a matrix Θ* ∈ R^{m1×m2} on the basis of N noisy observations. The unknown matrix Θ* is assumed to be either exactly low rank, or "near" low-rank, meaning that it can be well-approximated by a matrix with low rank. We consider a standard M-estimator based on regularization by the nuclear or trace norm over matrices, and analyze its performance under high-dimensional scaling. We define the notion of restricted strong convexity (RSC) for the loss function, and use it to derive nonasymptotic bounds on the Frobenius norm error that hold for a general class of noisy observation models, and apply to both exactly low-rank and approximately low-rank matrices. We then illustrate consequences of this general theory for a number of specific matrix models, including low-rank multivariate or multi-task regression, system identification in vector autoregressive processes, and recovery of low-rank matrices from random projections. These results involve nonasymptotic random matrix theory to establish that the RSC condition holds and to determine an appropriate choice of regularization parameter. Simulation results show excellent agreement with the high-dimensional scaling of the error predicted by our theory.
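A hedged toy example of a nuclear-norm-regularized M-estimator (for the simple fully observed noisy-matrix model, where the estimator reduces to singular-value soft-thresholding; not the paper's setting, estimator code, or data) might look like this:

```python
import numpy as np

rng = np.random.default_rng(1)

def svt(Y, lam):
    """Singular-value soft-thresholding: proximal operator of lam * nuclear norm at Y."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ np.diag(np.maximum(s - lam, 0.0)) @ Vt

# Toy near-low-rank truth: a rank-3 matrix plus a small full-rank perturbation.
m1, m2, r = 60, 50, 3
theta_star = rng.standard_normal((m1, r)) @ rng.standard_normal((r, m2))
theta_star += 0.01 * rng.standard_normal((m1, m2))

sigma = 0.5
Y = theta_star + sigma * rng.standard_normal((m1, m2))     # N = m1*m2 noisy entries

# For the fully observed model, the nuclear-norm M-estimator has a closed form:
# argmin_T 0.5*||Y - T||_F^2 + lam*||T||_*  =  svt(Y, lam).
lam = sigma * (np.sqrt(m1) + np.sqrt(m2))                   # theory-inspired scaling (assumed)
theta_hat = svt(Y, lam)

err = np.linalg.norm(theta_hat - theta_star) / np.linalg.norm(theta_star)
print("rank of estimate:", np.linalg.matrix_rank(theta_hat, tol=1e-6))
print("relative Frobenius error:", round(float(err), 3))
```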
User impatience and network performance In this work, we analyze from passive measurements the correlations between the user-induced interruptions of TCP connections and different end-to-end performance metrics. The aim of this study is to assess the possibility for a network operator to take into account the customers' experience for network monitoring. We first observe that the usual connection-level performance metrics of the interrupted connections are not very different, and sometimes better than those of normal connections. However, the request-level performance metrics show stronger correlations between the interruption rates and the network quality-of-service. Furthermore, we show that the user impatience could also be used to characterize the relative sensitivity of data applications to various network performance metrics.
Fuzzy concepts and formal methods: some illustrative examples It has been recognised that formal methods are useful as a modelling tool in requirements engineering. Specification languages such as Z permit the precise and unambiguous modelling of system properties and behaviour. However, some system problems, particularly those drawn from the information systems (IS) problem domain, may be difficult to model in crisp or precise terms. It may also be desirable that formal modelling should commence as early as possible, even when our understanding of parts of the problem domain is only approximate. This paper identifies the problem types of interest and argues that they are characterised by uncertainty and imprecision. It suggests fuzzy set theory as a useful formalism for modelling aspects of this imprecision. The paper illustrates how a fuzzy logic toolkit for Z can be applied to such problem domains. Several examples are presented illustrating the representation of imprecise concepts as fuzzy sets and relations, and soft pre-conditions and system requirements as a series of linguistically quantified propositions.
The laws of large numbers for fuzzy random variables A new approach to the weak and strong laws of large numbers for fuzzy random variables is presented in this paper by proposing notions of convergence in probability and convergence with probability one for fuzzy random variables. We first consider the limit properties of fuzzy numbers by invoking the Hausdorff metric, and then extend them to convergence in probability and convergence with probability one for fuzzy random variables. We provide the notions of weak and strong convergence in probability and weak and strong convergence with probability one for fuzzy random variables. Finally, we obtain the weak and strong laws of large numbers for fuzzy random variables in both the weak and strong senses.
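As a hedged numerical illustration (not from the paper), the snippet below simulates i.i.d. triangular fuzzy random variables, averages them level-wise, and tracks a Hausdorff-type distance (sup over α-cuts) between the sample mean and the expected fuzzy number, which should shrink as n grows; the distributions are made up.

```python
import numpy as np

rng = np.random.default_rng(2)

def mean_alpha_cuts(centers, left, right, alphas):
    """Level-wise average of triangular fuzzy numbers (center - left, center, center + right)."""
    lo = np.mean(centers[:, None] - left[:, None] * (1.0 - alphas[None, :]), axis=0)
    hi = np.mean(centers[:, None] + right[:, None] * (1.0 - alphas[None, :]), axis=0)
    return lo, hi

alphas = np.linspace(0.0, 1.0, 101)
# Population: random centers ~ N(5, 1), random spreads ~ U(0.5, 1.5); the expected
# fuzzy number is then the triangular number (5 - 1, 5, 5 + 1).
exp_lo = 5.0 - 1.0 * (1.0 - alphas)
exp_hi = 5.0 + 1.0 * (1.0 - alphas)

for n in (10, 100, 1000, 10000):
    centers = rng.normal(5.0, 1.0, size=n)
    left = rng.uniform(0.5, 1.5, size=n)
    right = rng.uniform(0.5, 1.5, size=n)
    lo, hi = mean_alpha_cuts(centers, left, right, alphas)
    # Sup over alpha of the Hausdorff distance between the interval alpha-cuts.
    d_inf = np.max(np.maximum(np.abs(lo - exp_lo), np.abs(hi - exp_hi)))
    print(n, round(float(d_inf), 4))
```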
Scores (score_0 … score_13): 1.249984, 0.249984, 0.049997, 0.001999, 0.000027, 0.000001, 0, 0, 0, 0, 0, 0, 0, 0